Yes, we’re close enough that we now need to distinguish between lots of different sub-types of AGI. Some of these have already been achieved, some are not yet achieved, and some are debatable.
By my understanding of the term as originally intended, we now have AGI, though at the low end and with spiky capabilities. It’s getting much harder to find cognitive tasks that frontier systems cannot do out of the box, and I don’t think there are any known tasks that 1) most humans can do, and 2) the best current AI models definitely could not do if given time, access to all the tools that humans have access to, and the ability to develop their own frameworks and tools.