Maybe we should think explicitly about what work is done by the concept of AGI, but I do not feel like calling GPT an AGI does anything interesting to my world model. Should I expect ChatGPT to beat me at chess? Its next version? If not, is that due to a shortage of data or compute? Will it take over the world? If not, may I conclude that the next AGI won't either?
I understand why the bar-shifting looks like motivated reasoning, and probably most of it actually is, but it deserves much more credit than you give it. We had an undefined concept of "something with virtually all the cognitive abilities of a human, and that can therefore do whatever a human can", along with some dubious assumptions like "if it can sensibly talk about everything, it can probably understand everything". Then we encounter ChatGPT, and it is amazing at speaking, except that it gives the strong impression of talking to an NPC: an NPC who knows lots of stuff and can even sort-of reason in very constrained ways, do basic programming, and be "creative" in the sense of writing poetry, but who is sub-human at things like gathering useful information, inferring people's goals, etc. So we conclude that some cognitive ability is still missing, and try to think about how to correct for that.
Now, I do not care whether we call GPT an AGI, but then you will have to invent a name for the super-AGI things that we will try to achieve next, and that we know to be possible because humans exist.