I get the sense that “RSI” will start being used to mean continual learning, or even just memory features, in 2026, much as there are currently attempts to dilute “ASI” to mean merely robust above-human-level competence. Thus recursively self-improving personal superintelligence becomes a normal technology through the power of framing. Communication can fail until the trees start boiling the oceans, at which point it becomes a matter of framing and ideology rather than isolated terminological disputes. That nothing ever changes is a well-established worldview, and it’s learning to talk about AI.
The end states of AI danger need terms to describe them. RSI proper is qualitative self-improvement, at least a software-only singularity, rather than merely learning from the current situation, automated training of new skills, or keeping track of grocery preferences. And ASI proper is being qualitatively more capable than humanity, rather than a somewhat stronger cognitive peer with AI advantages, a technology that takes everyone’s jobs.
It’s also worth remembering that (actual) RSI was never a necessary condition for ruin. It seems at least plausible that at some point human AI researchers will, on their own, find methods of engineering an AGI to sufficiently superhuman levels that it is smart enough to start developing nanotech and/or socially engineering humans for its bootstrapping needs.
So even if labs were carefully monitoring for RSI and trying to avoid it (rather than deliberately engineering for it, plus frog-boiling in the meantime), an AI inclined to take over might find that it doesn’t even need to bother with potentially dicey self-modifications until after it has already secured victory.
I would require for “AGI”:
Ability to learn from sensory data alone (no raw text required, unlike LLMs)
Active inference / predictive coding
Continual learning
At least that’s what animals (including humans) can do. Though there might then be a system that intuitively qualifies as ASI but not as AGI, e.g. a very smart LLM.
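To make the predictive coding criterion a bit more concrete, here is a deliberately minimal toy sketch (my own illustration, not any specific published model): an agent maintains an internal estimate of a hidden cause, issues top-down predictions of the next sensory sample, and updates its belief purely from bottom-up prediction error, i.e. it learns from the sensory stream alone with no text involved.

```python
# Toy predictive coding loop (illustrative sketch only): the agent's
# internal estimate is refined by error-driven updates until its
# predictions track the hidden signal behind the noisy observations.

def predictive_coder(stream, lr=0.3):
    """Track a noisy sensory stream by minimizing prediction error."""
    estimate = 0.0  # internal belief about the hidden cause
    errors = []
    for observation in stream:
        prediction = estimate             # top-down prediction
        error = observation - prediction  # bottom-up prediction error
        estimate += lr * error            # update belief to reduce error
        errors.append(abs(error))
    return estimate, errors

# A roughly constant hidden signal (1.0) with small perturbations:
# prediction errors shrink as the estimate converges on the signal.
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.0]
estimate, errors = predictive_coder(stream)
```

The point of the sketch is only the shape of the loop: prediction, error, belief update, repeated over raw sensory input. Continual learning would mean running this loop indefinitely while retaining what was already learned, which is exactly what current LLM deployments do not do.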