The inverse argument I have seen on Reddit emerges if you examine how these AI models might actually work and learn.
One method is to use a large benchmark of tasks, where model capability is measured as the weighted harmonic mean of the scores across all tasks.
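A weighted harmonic mean aggregate like this can be sketched in a few lines. The task scores and weights below are invented for illustration; the harmonic mean's key property is that one near-zero task score drags the whole aggregate toward zero, so a model can't hide a weak skill behind strong ones.

```python
# Sketch of the benchmark aggregate described above. Scores and weights
# are made-up illustration values, not any real benchmark's numbers.

def weighted_harmonic_mean(scores, weights):
    assert all(s > 0 for s in scores), "harmonic mean needs positive scores"
    return sum(weights) / sum(w / s for s, w in zip(scores, weights))

scores = [0.9, 0.8, 0.2]    # the 0.2 task dominates the aggregate
weights = [1.0, 1.0, 1.0]
print(round(weighted_harmonic_mean(scores, weights), 3))  # ~0.408, vs arithmetic mean ~0.633
```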
As the models run, much of the information gained doing real-world tasks is added as training and test tasks to the benchmark suite. (You do this whenever a chat task has an output that can be objectively checked; for robotic tasks, you run in lockstep a neural sim, similar to Sora, that makes testable predictions about future real-world inputs.)
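The lockstep sim-versus-reality check might look, in caricature, like scoring the sim's predicted observations against what the robot actually recorded; episodes that can be scored this way become regression tests for the next model generation. The function and tolerance here are my assumptions, not any real system's API.

```python
# Hypothetical sketch: compare a neural sim's predicted future
# observations against the observations the robot actually recorded.
import numpy as np

def score_sim_prediction(predicted, observed, tol=0.05):
    """Return (passes, error): mean squared error between prediction
    and reality, and whether it falls under an assumed tolerance."""
    err = float(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2))
    return err <= tol, err

# Made-up episode: the prediction tracks reality closely, so it passes.
ok, err = score_sim_prediction([0.1, 0.2, 0.3], [0.12, 0.18, 0.31])
print(ok, round(err, 4))
```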
This means most models learn from millions of parallel instances of themselves and of other models. So the more models are deployed in the world, and the more labor is automated, the more this learning mechanism gets debugged, the faster models learn, and so on.
There are also all kinds of parallel task gains. For example, once models have experience maintaining the equipment in a coke-can factory, an auto plant, and a 3D-printer plant, this variety of tasks with common elements should let new models trained in sim gain "general maintenance" skills, at least for machines similar to the three given. (The "skill" is a common policy network that compresses the three similar policies down to one policy in the new version of the network.)
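The "compress three policies into one" idea can be sketched, very loosely, as pooled-data distillation: pool the state-to-action data from the three plant-specific policies and fit one shared policy on all of it. Everything here (fault types, plants, the linear policy standing in for a network) is a made-up toy, not a real robotics pipeline.

```python
# Toy distillation sketch: three plant-specific policies share two fault
# types (0 and 1) and each has one plant-specific fault. One shared linear
# policy fit on the pooled data covers all three plants.
import numpy as np

n_faults, n_actions = 5, 4
teachers = [
    {0: 0, 1: 1, 2: 2},  # can factory: faults 0,1 shared; fault 2 specific
    {0: 0, 1: 1, 3: 3},  # auto plant
    {0: 0, 1: 1, 4: 2},  # 3D-printer plant
]

# Pool all teachers' (fault -> action) examples as one-hot vectors,
# then fit a single linear policy by least squares.
X, Y = [], []
for table in teachers:
    for fault, action in table.items():
        x = np.zeros(n_faults); x[fault] = 1.0
        y = np.zeros(n_actions); y[action] = 1.0
        X.append(x); Y.append(y)
W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)

def general_policy(fault):
    x = np.zeros(n_faults); x[fault] = 1.0
    return int(np.argmax(x @ W))

print([general_policy(f) for f in range(n_faults)])  # one policy, all three plants
```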
With each subsequent task, the delta (the skills the AI system needs that it doesn't already know) shrinks. This learning requirement likely shrinks faster than task difficulty grows. (The most difficult tasks are still doable by a human, and the AI system can also cheat in a bunch of ways, for example using better actuators to make skilled manual trades easy, or software helpers to best champion Olympiad contestants.)
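The shrinking delta can be caricatured with a toy model, entirely my own assumptions: skills form a fixed finite set and each task needs a random subset. The qualitative point is that as learned skills accumulate, the per-task delta collapses toward zero.

```python
# Toy model of the shrinking delta. Skill universe and tasks-per-skill
# counts are arbitrary; only the qualitative trend matters.
import random
random.seed(1)

N_SKILLS, PER_TASK = 1000, 20
known = set()
deltas = []
for _ in range(200):
    required = set(random.sample(range(N_SKILLS), PER_TASK))
    delta = required - known      # skills this task needs but aren't known yet
    deltas.append(len(delta))
    known |= delta                # learn the missing skills
print(deltas[0], deltas[-1])      # first task: all 20 skills new; late tasks: almost none
```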
You then have to look at what barriers there are to AI doing a given task to decide which tasks are protected for a while.
Things that simply require a human body:
Medical test subject.
Food taster, perfume evaluator, fashion or aesthetics evaluator.
Various kinds of personal service worker.
AI Supervisor roles:
Arguably, checking that the models haven't betrayed us yet, and sanity-checking their plans and outputs, could be a massive source of employment.
AI developer roles:
The risks mean some humans need a deep understanding of how the current generation of AI works, plus the tools and time to examine what happened during a failure. Someone in this role needs to be skeptical of any explanation offered by another AI system, for the obvious reasons.
Government/old institution roles:
Institutions that don't value making a profit may continue using human staff for decades after AI can do their jobs, even when it can be shown that AI makes fewer errors and more legally sound decisions.
TLDR: Arguably, for the portion of jobs that can be automated, growth should be exponential, spreading from the easiest and most common jobs to the most difficult and unique ones.
There is a portion of tasks that humans will be required to do for a while, and a portion it might be a good idea never to automate.