If AGI is human-equivalent for the purposes of developing a civilization, a collective of AGIs is at least as capable as humanity, and it also has AI advantages, so it’s much more capable than a single AGI instance or any single human. This leads to “ASI” often being used synonymously with “AGI” lately (via individual vs. collective conflation). Such use of “ASI” might free up “AGI” for something closer to its original meaning, which didn’t carry the implication of human-equivalence. But this setup leaves the qualitatively-more-capable-than-humanity bucket without a label, one that’s important for gesturing at AI danger.
I think the other extreme for the meaning of “ASI”, being qualitatively much stronger than humanity, can be made more specific by having “ASI” refer to the level of capabilities that follows a software-only singularity (under the assumption that it does advance capabilities a lot). This way, it’s neither literal technological maturity of hitting the limits of physical law, nor merely a collective of jagged-human-level AGI instances wielding their AI advantages. Maybe “RSI” is a more stable label for this, as in the Superintelligence Strategy framing, where “intelligence recursion” is the central destabilization bogeyman, rather than any given level of capabilities on its own.
What do you think of “Locked In AI (LIAI)” for when an AI becomes sufficiently capable that its preferences / utility function are “locked in” and can no longer be altered or avoided by other agents? This “locking in” is how I refer to the theoretical point in an RSI when it becomes too late to stop or alter course.
Also, for what it’s worth, I like “artificial general super intelligence (AGSI)”, which then frees up “AGI” for AI that does general reasoning and language at any level of capability, and hilariously frees up “ASI” to refer to any AI that does what it does better than any human, so a pocket calculator is an ASI because it does arithmetic better than any human. Though, more confusingly, LLMs would be ASI and not AGI because they are superhuman at text prediction, while chatbots made from LLMs would be AGI and not ASI because they reason and talk with general intelligence, which still seems more limited in some ways than human reasoning.