On the contrary, if your AGI definition includes most humans, it sucks.
All the interesting stuff that humanity does is done by doing things that most humans can't do. What you call baby AGI is by itself not very relevant to any of the dangers about AGI discussed in e.g. Superintelligence. You could quibble with the literal meaning of "general" or whatever, but the historical associations with the term seem much more important to me. If people read years of arguments about how AGI will kill everyone and then you use the term AGI, obviously people will think you mean the thing with the properties they've read about.
Bottom line is, current AI is not the thing we were talking about under the label AGI for the last 15 years before LLMs, so we probably shouldn’t call it AGI.
> AGI discussed in e.g. Superintelligence
The fact that this is how you're clarifying it shows my point. While I've heard people talking about "AGI" (meaning weak ASI) having significant impacts, it's seldom discussed as leading to x-risks. To give maybe the most famous recent example, AI 2027 ascribes the drastic changes and extinction specifically to ASI. Do you have examples in mind of credible people specifically talking about x-risks from AGI? Bostrom's book refers to superintelligence, not general intelligence, as the superhuman AI.
Also, I find the argument that we should continue to use muddled terms to avoid changing their meaning not that compelling, at least when it’s easy to clarify meanings where necessary.