‘Superintelligence’ seems more fitting than AGI for the ‘transformative’ scope. The problem with “transformative AI” as a term is that transformation will occur at staggered rates across subdomains. Text-based generation, for example, reached quality thresholds that video generation only caught up to several years later.
I don’t love ‘superintelligence’ as a term, and even less as a goalpost (I’d much rather be in a world aiming for AI ‘superwisdom’), but of the commonly used terms it seems the best fit for what people are trying to describe: an AI generalized and sophisticated enough to be “at or above maximal human competency in most things.”
The OP, at least to me, seems correct that AGI as a term belongs to its foundations as a differentiator from narrowly scoped competencies in AI, and that the lines for generalization are sufficiently blurred at this point with transformers that we should stop moving the goalposts for the ‘G’ in AGI. And from what I’ve seen, there’s active harm in the industry where treating ‘AGI’ as some far-future development leads people less up to date with research on things like world models or prompting to conclude that GPTs are “just Markov predictions” (overlooking the self-attention mechanism and the surprising degree of generalization its presence produces).
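To make the “just Markov predictions” contrast concrete: a Markov-style predictor conditions on a fixed-order window of previous tokens, whereas a causal self-attention layer mixes information from the entire preceding context, with mixing weights that depend on the content of that context. A minimal NumPy sketch (toy dimensions and random weights, purely illustrative, not any actual GPT implementation):

```python
import numpy as np

def causal_self_attention(X, Wq, Wk, Wv):
    """One scaled dot-product self-attention head over a sequence X.

    X: (seq_len, d_model) token representations.
    Unlike a fixed-order Markov predictor, each output row is a
    content-dependent weighted average over ALL earlier positions.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # (seq_len, seq_len)
    # Causal mask: position i may attend only to positions <= i.
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores[mask] = -np.inf
    # Softmax over each row; masked entries get weight 0.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                      # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = causal_self_attention(X, Wq, Wk, Wv)       # (5, 8)
```

The point of the sketch is the `weights` matrix: its entries are computed from the tokens themselves, so what the model attends to shifts with context, which is qualitatively different from a fixed transition table.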
I would wager the vast majority of consumers of these models underestimate the generalization present: in addition to their naive usage of outdated free models, they’ve been reading article after article about how it’s “not AGI” and “just fancy autocomplete” (reflecting a separate phenomenon in which professional writers seem more inclined to write negative than positive articles about a technology perceived as a threat to writing jobs).
As this topic becomes more important, it might be useful for democracies to have a more accurately informed public, and AGI as a moving goalpost seems counterproductive to that aim.
To me, superintelligence implies qualitatively much smarter than the best humans. I don’t think this is needed for AI to be transformative. Fast and cheap-to-run AIs which are as qualitatively smart as humans would likely be transformative.
Agreed. I thought you wanted that term as a replacement for ‘AGI’ as the OP described it being used in relation to x-risk.
As for “fast and cheap and comparable to the average human”: for a number of roles and niches, we’re already there.
Sticking with the intent behind your term, maybe “generally transformative AI” is a more accurate colloquial replacement for ‘AGI’?
Oh, by “as qualitatively smart as humans” I meant “as qualitatively smart as the best human experts”.
I also maybe disagree with the claim that for a number of roles and niches we’re already there. Or at least, the percentage of economic activity covered by this still seems low to me.
I think “as qualitatively smart as the best human experts” is more comparable to saying “as smart as humanity.” No individual human is as smart as humanity in general.