So, my understanding of ASI is that it’s supposed to mean “A system that is vastly more capable than the best humans at essentially all important cognitive tasks.” Currently, AIs are indeed more capable, possibly even vastly more capable, than humans at a bunch of tasks, but they are not more capable at all important cognitive tasks. If they were, they could easily do my job, which they currently cannot.
Two terms I use in my own head, that largely correlate with my understanding of what people meant by the old AGI/ASI:
“Drop-in remote worker”—A system capable of automating the work of a large chunk of remote workers (I’ve used 50% before, but even 10% would be enough to change a lot) by doing the job of each such worker with similar oversight and context as a human contractor. In this definition, the model likely gets a lot of help to set up, but then can work autonomously. E.g. if Claude Opus 4.5 could do this, but couldn’t have built Claude Code for itself, that’s fine.
This AI is sufficient to cause severe economic disruption and likely to advance AI R&D considerably.
“Minimum viable extinction”—A system with the capabilities to destroy all of humanity, if it desires to. (The system is not itself required to survive this.) This is when we reach the point where a sufficiently bad alignment failure doesn’t give us a second try. Unfortunately, this one is quite hard to measure, especially if the AI itself doesn’t want to be measured.
I think this one sounds like it describes a single level of capability, but quietly assumes that the capabilities of “a remote worker” are basically static compared to the speed of capabilities growth. A late-2025 LLM with the default late-2025 agent scaffold provided by the org releasing that model (e.g. chatgpt.com for OpenAI) would have been able to do many of the jobs posted to Upwork in 2022. But these days, before posting a job to Upwork, most people will at least try running their request by ChatGPT to see if it can one-shot it, and so those exact jobs no longer exist. The jobs which still exist are those which require some capability that is not available to anyone with a browser and $20 to their name.
This is a fine assumption if you expect AI capabilities to go from “worse than humans at almost everything” to “better than humans at almost everything” in short order, much, much faster than “legacy” organizations can adapt to them. I think that worldview is pretty well summarized by the graph from the Wait But Why AI article:
But if the time period isn’t short, we may instead see that “drop-in remote worker” is a moving target in the same way “AGI” is, and so we may get AI with the scary capabilities we care about without getting a clear indication like “you can now hire a drop-in AI worker that is actually capable of all the things you would hire a human to do”.
Interesting. You have convinced me that I need a better definition for this approximate level of capabilities. I do expect AI to advance faster than legacy organisations will adapt, such that it would be possible to have a world of “10% of jobs can be done by AI”, but the AI capabilities would need to be higher than “can replace 10% of jobs in 2022”.
I still find that WBW post series useful to send to people, 10 years after it was published. Remarkably good work, that.