I think this one sounds like it describes a single level of capability, but quietly assumes that the capabilities of “a remote worker” are basically static compared to the speed of capability growth. A late-2025 LLM with the default late-2025 agent scaffold provided by the org releasing that model (e.g. chatgpt.com for OpenAI) would have been able to do many of the jobs posted to Upwork in 2022. But these days, before posting a job to Upwork, most people will at least try running their request by ChatGPT to see if it can one-shot it, and so those exact jobs no longer exist. The jobs which still exist are those which require some capability that is not available to anyone with a browser and $20 to their name.
This is a fine assumption if you expect AI capabilities to go from “worse than humans at almost everything” to “better than humans at almost everything” in short order, much, much faster than “legacy” organizations can adapt to them. I think that worldview is pretty well summarized by the graph from the waitbutwhy AI article:
But if the time period isn’t short, we may instead see that “drop-in remote worker” is a moving target in the same way “AGI” is, and so we may get AI with the scary capabilities we care about without getting a clear indication like “you can now hire a drop-in AI worker that is actually capable of all the things you would hire a human to do”.
Interesting. You have convinced me that I need a better definition for this approximate level of capabilities. I do expect AI to advance faster than legacy organisations will adapt, such that it would still be possible to have a world where “10% of jobs can be done by AI”, but the AI capabilities would need to be higher than “can replace 10% of the jobs that existed in 2022”.
I still find that WBW post series useful to send to people, 10 years after it was published. Remarkably good work, that.