I don’t think of LLMs like GPT3 as agents that use language; they are artificial linguistic cortices, which can be useful to brains as (external or internal) tools.
I imagine that a more ‘true’ AGI system will be somewhat brain-like in that it will develop a linguistic cortex purely through embedded active learning in a social environment, but will be much more than just that one module—even if that particular module is the key enabler for human-like intelligence as distinct from animal intelligence.
That’s certainly true, but how to make sim-grown agents that learn a language from scratch currently seems to be an unsolved problem.
I find this statement puzzling, because it is fairly obvious how to build sim-grown agents that learn language from scratch: replicate something like a human child’s developmental environment and train a sufficiently powerful/general model there. This probably requires a sim where the child is immersed in adults conversing with it and with each other, a sufficiently complex action space, etc. That probably hasn’t been done yet partly because nobody has bothered to try (it would take 10 years of data at least, perhaps via a thousand volunteers each contributing a couple of weeks?) and perhaps also because current systems don’t have the capacity/capability to learn quickly enough, for various reasons.
The game/sim path to AGI—which is more or less DeepMind’s traditional approach—probably goes through animal-like intelligence first, and arguably things like VPT are already getting close. That is of course not the only path: there’s also a prosaic GPT3-style path where you build out individual specialized modules first and then gradually integrate them.