A while ago I wrote a post on why I think a “generality” concept can be usefully distinguished from an “intelligence” concept. Someone with a PhD is, I argue, not more general than a child, just more intelligent. Moreover, I would even argue that humans are a lot more intelligent than chimpanzees, but hardly more general. More broadly, animals seem to be highly general, just sometimes quite unintelligent.
For example, they (we) can do predictive coding: predicting future sensory inputs in real time, reacting to them with movements, and learning from wrong predictions. This allows animals to be quite directly embedded in physical space and time (which solves “robotics”), instead of relying on a fairly narrow and abstract API (like text tokens) that is not even real-time. Current autoregressive transformers can’t do that.
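To make the predict-compare-learn loop concrete, here is a minimal toy sketch of a predictive-coding-style update: an agent predicts its next sensory input, receives the actual input, and adjusts its internal model from the prediction error. This is purely illustrative (a linear model on a toy signal), not a claim about how brains implement it; all names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensory_stream(t):
    """Toy 'physical world': a smooth signal the agent must predict."""
    return np.sin(0.1 * t)

# Internal model: predict the next sample from the last two samples (linear).
w = rng.normal(scale=0.1, size=2)
history = [sensory_stream(0), sensory_stream(1)]
lr = 0.05  # learning rate

errors = []
for t in range(2, 500):
    x = np.array(history[-2:])
    prediction = w @ x           # predict the next sensory input
    actual = sensory_stream(t)   # the world delivers the real input
    error = actual - prediction  # prediction error drives learning
    w += lr * error * x          # online update from the error
    errors.append(abs(error))
    history.append(actual)

# Early prediction errors should be larger than late ones as the model adapts.
print(np.mean(errors[:50]) > np.mean(errors[-50:]))
```

The point of the sketch is the shape of the loop: the learning signal comes directly and continuously from the world, not from a fixed corpus prepared in advance.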
An intuition for this is the following: if we could build an artificial mouse-intelligence, we could likely scale that model to human-intelligence and beyond quite easily, because the mouse brain doesn’t seem architecturally or functionally very different from a human brain; it’s just smaller. This suggests that mice are general intelligences (non-A-GIs) like us. They are just not very smart. Like a small language model that has the same architecture as a larger one.
A more subtle point: Predictive coding means learning from sensory data, by trying to predict that sensory data. The difference between predicting sensory data and human-written text is that the former is, pretty directly, generated by the physical world, while existing text is constrained by how intelligent the humans who wrote it were. So language models merely imitate humans via predicting their text, which leads to diminishing returns, while animals (humans) predict physical reality quite directly, which has no similar ceiling. Scaling up a mouse-like AGI would therefore likely be followed quickly by an ASI, whereas scaling up pretrained language models has led to diminishing returns once their text gets as smart as the humans who wrote it, as the diminishing results with Orion and other recent frontier base models have shown. Yes, scaling CoT reasoning is another approach to improving LLMs, but this is more like teaching a human how to think for longer rather than making them more intelligent.