Thank you for writing this. I have been making the same argument for about two years now, but you have argued the case better here than I could have. As you note in your edit, it is possible for goalposts to be moved purposefully, but this irks me for a number of reasons beyond mere obstinacy:
The transition from narrow AI to truly general AI is socially transformative, and we are living through that transition right now. We should be having a conversation about this, but are being hindered from doing so because the very concept of Artificial General Intelligence has been co-opted.
The confusion originates, I think, in the widespread pre-GPT belief that achieving general intelligence is all that is required to kick off the singularity. GPT demonstrates quite clearly that this belief is false. This doesn’t mean the foomers/doomers are wrong to be worried about AI, but it is a glaring hole in the standard arguments for their position. It should be talked about more, yet confusion over terminology is preventing that from happening.
Moving the goalposts to define AGI in terms of radically transformative and/or superhuman capabilities is begging the question. To say that we haven’t achieved AGI because modern AI hasn’t literally taken over the world and/or killed all humans is to assume that unaligned AGI would necessarily lead to such outcomes. Pre-2017 AI x-risk people routinely argued that even a middling artificial general intelligence would be able to enter a recursive self-improvement cycle and reach superhuman capabilities in short order. Although I have no insider info, I believe this line of thinking is what led to EY’s public meltdown a year or so ago. I disagree with him, but I respect that he took his line of thinking to its logical conclusion and accepted the consequences. Most of the rationalist community has not updated on the evidence of GPT being AGI the way EY has, and I think this goalpost moving has a lot to do with that. Be intellectually honest!
The AI x-risk community claimed that the sky was falling, that the development of AGI would end the human race. Well, we’re now 2–7 years out from the birth of AGI (depending on which milestone you choose), and Skynet scenarios seem no closer to fruition. If the x-risk community wants to be taken seriously, it needs to confront this contradiction head-on rather than shifting definitions to avoid hard questions.