The "next token prediction" is a distraction. That's not where the interesting part of an AI model happens.
If you think of the tokenization near the end as a serializer, something like turning an object model into JSON, you get a better understanding. The interesting part of an OOP program is not in the JSON, but in what happens in memory before the JSON is created.
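As a toy sketch of that analogy (entirely illustrative, nothing here refers to a real model): the interesting computation happens on in-memory state, and the JSON at the end is just a flat projection of it.

    import json

    class Simulation:
        """Toy "object model": the interesting work happens on in-memory state."""
        def __init__(self):
            self.state = {"position": 0.0, "velocity": 1.0}

        def step(self, dt=0.1):
            # The actual computation lives here, not in the output format.
            self.state["position"] += self.state["velocity"] * dt

        def serialize(self):
            # The JSON is just a flat projection of the in-memory state,
            # much like sampled tokens are a projection of the latent state.
            return json.dumps(self.state)

    sim = Simulation()
    sim.step()
    print(sim.serialize())  # {"position": 0.1, "velocity": 1.0}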
Likewise, the interesting parts of a neural net model, whether it's LLMs, AlphaProteo or some diffusion-based video model, happen in the steps that operate in their latent space, which is in many ways similar to our subconscious thinking.
In those layers, the AI models detect deeper and deeper patterns of reality, much deeper than the surface patterns of the text, images, video, etc. used to train them. Also, many of these patterns generalize when different modalities are combined.
From this latent space, you can "serialize" outputs in several different ways. Text is one, image/video another. For now, the latent spaces are not general enough to do all of these equally well; instead, models are created that specialize in one modality.
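A minimal sketch of what "serializing" the same latent state into different modalities could look like, assuming a shared latent vector and hypothetical decoder heads (the names, sizes and heads are invented for illustration, not taken from any real system):

    import torch
    import torch.nn as nn

    LATENT_DIM = 512

    class TextHead(nn.Module):
        """Hypothetical text "serializer": latent vector -> token logits."""
        def __init__(self, vocab_size=32000):
            super().__init__()
            self.proj = nn.Linear(LATENT_DIM, vocab_size)

        def forward(self, z):
            return self.proj(z)

    class ImageHead(nn.Module):
        """Hypothetical image "serializer": latent vector -> pixel grid."""
        def __init__(self, side=32):
            super().__init__()
            self.side = side
            self.proj = nn.Linear(LATENT_DIM, side * side * 3)

        def forward(self, z):
            return self.proj(z).view(-1, 3, self.side, self.side)

    # The same latent state can be decoded into different output formats.
    z = torch.randn(1, LATENT_DIM)
    token_logits = TextHead()(z)   # shape (1, 32000)
    image = ImageHead()(z)         # shape (1, 3, 32, 32)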
I think the step to AGI does not require throwing a lot more compute at the models, but rather having them straddle multiple modalities better, in particular these:
- Physical world modelling at the level of Veo3 (possibly with some lessons from self-driving or robotics models for elements like object permanence and perception)
- Symbolic processing of the best LLMs.
- Ability to be goal-oriented and iterate towards a goal, similar to the Alpha* family of systems
- Optionally: Optimized for the use of a few specific tools, including a humanoid robot.
Once all of these are integrated into the same latent space, I think we basically have what it takes to replace most human thought.
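As a purely hypothetical sketch of what "integrated into the same latent space" could mean mechanically: each modality gets its own encoder, but all of them project into one shared vector space where a planner or decoder could operate (module names and dimensions are invented for illustration):

    import torch
    import torch.nn as nn

    LATENT_DIM = 512

    class VideoEncoder(nn.Module):
        """Hypothetical physical-world encoder: frames -> shared latent."""
        def __init__(self):
            super().__init__()
            self.proj = nn.Linear(3 * 32 * 32, LATENT_DIM)

        def forward(self, frames):
            return self.proj(frames.flatten(1))

    class TextEncoder(nn.Module):
        """Hypothetical symbolic encoder: token ids -> shared latent."""
        def __init__(self, vocab_size=32000):
            super().__init__()
            self.embed = nn.EmbeddingBag(vocab_size, LATENT_DIM)

        def forward(self, token_ids):
            return self.embed(token_ids)

    video_z = VideoEncoder()(torch.randn(1, 3, 32, 32))
    text_z = TextEncoder()(torch.randint(0, 32000, (1, 16)))

    # Both live in the same space, so a goal-directed search or decoder
    # could operate on a single fused representation.
    fused = (video_z + text_z) / 2
    print(fused.shape)  # torch.Size([1, 512])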
>which is in many ways similar to our subconscious thinking
this is just made up.
- we don't have any useful insight on human subconscious thinking.
- we don't have any useful insight on the structures that support human subconscious thinking.
- the mechanisms that support human cognition that we do know about are radically different from the mechanisms that current models use. For example, we know that biological neurons and synapses are structurally diverse, we know that suppression and control signals are used to change the behaviour of the networks, and we know that chemical control layers (hormones) transform the state of the system.
We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.
Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!
Yea whenever we get into this sort of “what’s happening in the network is like what’s going on in your brain” discussion people never have concrete evidence of what they’re talking about.
The diversity is itself indicative, though, that intelligence isn't bound to the particularities of the human nervous system. Across different animal species, nervous systems show a radical diversity. Different architectures; different or reversed neurotransmitters; entirely different neural cell biologies. It's quite possible that "neurons" evolved twice, independently. There's nothing magic about the human brain.
Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.
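A minimal sketch of that "injury" point (a toy model, not any specific published experiment): randomly zero out roughly half of a layer's weights, then let training update only the surviving ones.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
    x, y = torch.randn(256, 10), torch.randn(256, 1)
    loss_fn = nn.MSELoss()

    # "Injure" the first layer: mask out ~50% of its weights.
    layer = model[0]
    mask = (torch.rand_like(layer.weight) > 0.5).float()
    with torch.no_grad():
        layer.weight *= mask

    # Retrain; masked weights are kept at zero, so only the survivors adapt.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        layer.weight.grad *= mask
        opt.step()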
Obviously there are still gaps in ML architectures compared to biological brains, but there's no particular reason to believe they're fundamental to existence in silico, as opposed to myelinated bags of neurotransmitters.
>The diversity is itself indicative, though, that intelligence isn't bound to the particularities of the human nervous system. Across different animal species, nervous systems show a radical diversity. Different architectures; different or reversed neurotransmitters; entirely different neural cell biologies. It's quite possible that "neurons" evolved twice, independently. There's nothing magic about the human brain.
I agree - for example, octopuses are clearly somewhat intelligent, maybe very intelligent, and they have a very different brain architecture. Bees have a form of collective intelligence that seems to be emergent from many brains working together. Human cognition could arguably be identified as having a socially emergent component as well.
>Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.
I think we can only reasonably talk about the technology as it exists. I agree that there is no justifiable reason (that I know of) to claim that biology is unique as a substrate for intelligence or agency or consciousness or cognition or minds in general. But the history of AI is littered with stories of communities believing that a few minor problems just needed to be tidied up before everything works.
> We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.
This is a deliberate choice on the part of the model makers, because a fixed checkpoint is useful for a product. They could just keep the training mechanism going, but that's like writing code without version control.
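For what that would look like mechanically, here is a toy sketch (hypothetical file names and model, not anyone's production setup): the training loop keeps running on an incoming data stream, and periodically saved checkpoints play the role of commits.

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 1)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for step in range(1, 1001):
        # Stand-in for a live stream of new data.
        x, y = torch.randn(32, 16), torch.randn(32, 1)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        if step % 250 == 0:
            # Each snapshot is a fixed, reproducible "version" of the
            # ever-training model, analogous to a commit.
            torch.save(model.state_dict(), f"checkpoint_step_{step}.pt")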
> Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!
To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am I myself making surface level association after surface level association, but with enough levels to make it seem deep? I do not know how many grains make the heap.
>This is a deliberate choice on the part of the model makers, because a fixed checkpoint is useful for a product. They could just keep the training mechanism going, but that's like writing code without version control.
Training more and learning online are really different processes. In the case of large models I can't see how it would be practical to have the model learn as it was used because it's shared by everyone.
>To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am I myself making surface level association after surface level association, but with enough levels to make it seem deep? I do not know how many grains make the heap.
I can't speak for your cognition or subjective experience, but I do have fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face), and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.
Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.
> In the case of large models I can't see how it would be practical to have the model learn as it was used because it's shared by everyone.
Given it can learn from unordered text from the entire internet, it can learn from chats.
> I can't speak for your cognition or subjective experience, but I do have fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face), and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.
> Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.
Humans are very good at creating narratives about our minds, but in the cases where this can be tested, it is often found that our conscious experiences are preceded by other brain states in a predictable fashion, and that we confabulate explanations post-hoc.
So while I do not doubt that this is how it feels to be you, the very same lack of understanding of causal mechanisms within the human brain that makes it an error to confidently say that LLMs copy this behaviour also means we cannot truly be confident that the reasons we think we have for how we feel/think/learn/experience/remember are, in fact, the true reasons for how we feel/think/learn/experience/remember.
As far as I understood, any AI model is just a linear combination of its training data. Even if that were a corpus as large as the entire web... it's still just like a sophisticated compression of other people's expressions.
It has no experiences of its own and has not interacted with the outer world. Dunno, I don't want to rule out that something operating solely on language artifacts could develop intelligence or consciousness, whatever that is... but so far there are also enough humans we could care about and invest in.
LLMs are not a linear combination of training data.
Some LLMs have interacted with the outside world, such as through reinforcement learning while trying to complete tasks in simulated physics environments.
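A small demonstration of the first point (a toy calculation, not any production architecture): even a single softmax-attention step is non-linear in its input, so a stack of such layers cannot reduce to a linear combination of anything.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    def attention(x):
        # Self-attention with identity projections, for brevity.
        scores = x @ x.transpose(-2, -1) / x.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1) @ x

    a, b = torch.randn(4, 8), torch.randn(4, 8)

    # Linearity would require attention(a + b) == attention(a) + attention(b).
    print(torch.allclose(attention(a + b), attention(a) + attention(b)))  # False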