[Linkpost]
There’s an interesting Comment in Nature arguing that we should consider current systems AGI.
The term has largely lost its value at this point, just as the Turing test lost nearly all its value as we approached the point at which it was passed (because the closer we got, the more the answer depended on definitional details rather than questions about reality). I nonetheless found this particular piece worthwhile, because it considers and addresses a number of common objections.
Original (requires an account), Archived copy
Shane Legg (whose definition of AGI I generally use) disagrees with the authors on Twitter.
Yes, we’re close enough that we now need to distinguish between lots of different sub-types of AGI. Some of these have already been achieved, some are not yet achieved, and some are debatable.
By my understanding of the term as originally intended, we now have AGI, though at the low end and with spiky capabilities. It’s getting much harder to find cognitive tasks that frontier systems cannot do out of the box, and I don’t think there are any known tasks that 1) most humans can do, and 2) the best current AI models definitely wouldn’t be able to do even if given time, access to all the tools that humans have access to, and the ability to develop their own frameworks and tools.