I will persist in using “AGI” to describe the merely-quite-general AI of today, and use “ASI” for the really dangerous thing that can do almost anything better than humans can, unless you’d prefer to coordinate on some other terminology.
Tertiarily relevant annoyed rant on terminology:
I don’t really like referring to The Thing as “ASI” (although I do it too), because I foresee us needing to rename it to “AGSI” eventually, the same way we had to move from “AI” to “AGI”.
Specifically: I expect that AGI labs might start training their models to be superhuman at some very narrow tasks. This is already possible in biology (genome modeling, protein engineering; you could probably distill AlphaFold 2 into a sufficiently big LLM, etc.). Once that starts happening, perhaps on some suite of tasks better suited to going viral on Twitter, people will start running around saying that artificial superintelligence has been achieved. And in a literal sense, the chatbot would indeed be able to generate some superhuman results and babble about them; and because the distilled AlphaFold 2 (or whatever) will be crammed into a black-box LLM-chatbot wrapper, externally it will look as if the chatbot is a superintelligent reasoner. But in actuality, it may still be generally as dumb as today’s LLMs, except in the narrow domain or domains where it effectively has access to a superhuman tool.
So at that point, we’ll have to move the goalposts to talking about the dangers of artificial general superintelligence, rather than a mere artificial (narrow) superintelligence. Some people will also insist that LLMs’ general intelligence is already at human-ish levels, so in their view those LLMs will already be both AGI and ASI, just not AGSI. That will indubitably have excellent effects on the clarity of discourse.
I think it’s near-certain that I’ll be annoyed about all of this by 2028, so, as a proper Bayesian, I’m already annoyed about it now.