I feel like the post proves too much: it gives arguments for why foom is unlikely, but I don’t see arguments which break the symmetry between “humans cannot foom relative to other animals” and “AI cannot foom relative to humans”.* For example, the statements
brains are already reasonably pareto-efficient
and
Intelligence requires/consumes compute in predictable ways, and progress is largely smooth.
seem irrelevant or false in light of the human-chimp example. (Are animal brains Pareto-efficient? If not, I’m interested in what breaks the symmetry between humans and other animals. If yes, Pareto-efficiency doesn’t seem that useful for making predictions about capabilities/foom.)
*One way to resolve the situation is by denying that humans foomed (in a sense relevant for AI), but this is not the route taken in the post.
Separately, I disagree with many claims and the overall thrust in the discussion of AlphaZero.
Go is extremely simple [...] This means that the Go predictive capability of a NN model as a function of NN size completely flatlines at an extremely small size.
This seems unlikely to me, depending on what “completely flatlines” and “extremely small size” mean.
Games like Go or chess are far too small for a vast NN like the brain, so the vast bulk of its great computational power is wasted.
Go and chess being small/simple doesn’t seem like the reason why ANNs are way better than brains there. Or, if it is, we should see the difference between ANNs and brains shrinking as the environment gets larger/more complex. This model doesn’t seem to lead to good predictions, though: Dota 2 is a lot more complicated than Go and chess, and yet we have superhuman performance there. Or how complicated exactly does a task need to be before ANNs and brains are equally good?
(Perhaps relatedly: There seems to be an implicit assumption that AGI will be an LLM. “The AGI we actually have simply reproduces [cognitive biases], because we train AI on human thoughts”. This is not obvious to me—what happened to RL?)
On a higher level, the whole train of reasoning reads like a just-so story to me: “We have obtained superhuman performance in Go, but this is only because of training on vastly more data and the environment being simple. As the task gets more complicated the brain becomes more competitive. And indeed, LLMs are close to but not quite human intelligences!”. I don’t see this as a particularly good fit to the datapoints, or how this hypothesis is likelier than “There is room above human capabilities in ~every task, and we have achieved superhuman abilities in some tasks but not others (yet)”.
This model doesn’t seem to lead to good predictions, though: Dota 2 is a lot more complicated than Go and chess, and yet we have superhuman performance there. Or how complicated exactly does a task need to be before ANNs and brains are equally good?
My model predicts superhuman AGI in general—just that it uses and scales predictably with compute.
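To make “scales predictably with compute” concrete, here is a toy sketch of a Chinchilla-style power-law loss curve. The constants are entirely made up (not fitted to any real model); the only point is the qualitative shape such a law implies: capability improves smoothly with scale, with no discontinuous jump.

```python
# Toy illustration of a power-law scaling curve, L(N) = E + A / N**alpha.
# E, A, and alpha below are invented for illustration, not fitted to data.
E, A, alpha = 1.7, 400.0, 0.34

def loss(n_params: float) -> float:
    """Hypothetical loss as a function of parameter count."""
    return E + A / n_params**alpha

# Loss declines smoothly toward the irreducible floor E as N grows.
for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"N = {n:.0e}: loss ~ {loss(n):.3f}")
```

Under a law of this shape, each order of magnitude of scale buys a predictable, shrinking improvement; nothing in the curve itself produces a foom-like discontinuity.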
Dota 2 is only marginally more complicated than Go/chess; the world model is still very simple, as it can be simulated perfectly on a single low-end CPU core.
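As a rough back-of-envelope illustration of how small these board-game world models are, one can count the raw bits needed to encode a single game state (this ignores move legality and history, so the numbers are loose upper-bound sketches, not precise figures):

```python
import math

def state_bits(states_per_cell: int, cells: int) -> float:
    """Naive upper bound: bits to encode one board position."""
    return cells * math.log2(states_per_cell)

go_bits = state_bits(3, 19 * 19)    # each point: empty, black, or white
chess_bits = state_bits(13, 64)     # each square: empty or one of 12 pieces

print(f"Go position:    ~{go_bits:.0f} bits")
print(f"Chess position: ~{chess_bits:.0f} bits")
```

A few hundred bits per position is tiny compared to the state of any rich 3D physical environment, which is the sense in which these game worlds are computationally simple regardless of how hard optimal play is.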
Or how complicated exactly does a task need to be before ANNs and brains are equally good?
Driving cars would be a good start. In terms of game worlds there is probably nothing remotely close: it would need to be fully 3D and very open-ended, with extremely complex physics and detailed realistic graphics, populated with humans and/or advanced AI. (I’ve been out of games for a while and I’m not sure which game that would currently be, but it probably doesn’t exist yet.)
I don’t see arguments which break the symmetry between “humans cannot foom relative to other animals” and “AI cannot foom relative to humans”
In the section “Seeking true Foom”, the post argues that humans foomed because of culture, which none of the animals before us had. IMO, this invalidates the arguments in the first half of your comment (though not necessarily your conclusions).