Edit (27/01/25):
Thank you to everyone who gave thoughtful comments and feedback!
On reflection, and after reading in ‘Deep Utopia’ about Keynes’ 1930 prediction that we would each work 15-hour work-weeks today[1], I think it’s both likely and desirable that humans will work fewer hours per week in the future. That would be ‘falling employment’.
But AI doing more work does not necessitate humans doing less. It is not a logical implication, because the total amount of work done can expand. That non-implication matters to me, and it is the point I make in this post.[2]
TL;DR: As we deploy AI, the total amount of work being done will increase, and the % done by humans will fall. We cannot say from that alone whether, or how much, human employment will decline.
Sometimes, I hear economists make this argument about transformative AI:
I’ll believe it when it starts showing up in the GDP/employment statistics!
I think transformative AI will increase GDP. However, this does not necessitate a decline in human employment.
Anthropic CEO Dario Amodei imagines advanced AI as a “country of geniuses in a datacenter”. If such a country spontaneously sprang up tomorrow, I don’t think it would reduce human employment. Investors might want to re-allocate capital towards the country, but the country would require some inputs that it’s unable to self-supply.[3]
It is possible that human and AI inputs could be complementary to each other — by default or because they are legislated to be.
~4 billion humans and ~100 billion non-human worker-equivalents currently work (BOTEC). A ‘worker-equivalent’ here means ‘the amount of work one average human worker in 1700 could perform in a year.’ From 1900 to 2020, human labor input grew by ~2.5× while total economic work grew by ~16×, meaning most additional work was done by machines. On this BOTEC, only ~4% of work is done by humans today.[4]
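As a sanity check, here is a minimal sketch of that BOTEC in Python. The inputs are the rough, illustrative figures quoted above, not measured data:

```python
# Rough worker-equivalent BOTEC using the post's illustrative figures.
human_workers = 4e9       # ~4 billion humans working today
nonhuman_equiv = 100e9    # ~100 billion non-human worker-equivalents

total_today = human_workers + nonhuman_equiv
print(f"Human share of work today: {human_workers / total_today:.1%}")  # ~3.8%, i.e. ~4%

# 1900 -> 2020: human labor input grew ~2.5x while total work grew ~16x,
# so the human share of work fell by a factor of 16 / 2.5 = 6.4 over that period.
human_growth, total_growth = 2.5, 16.0
implied_share_1900 = (human_workers / total_today) * (total_growth / human_growth)
print(f"Implied human share of work in 1900: {implied_share_1900:.0%}")  # ~25%
```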
Some economists model that the amount of work done in the future will be the same as the amount of work done today. In Korinek and Suh’s ‘Scenarios for the Transition to AGI’:
The distribution function Φ(i) reflects the cumulative mass of tasks with complexity ≤ i and satisfies Φ(0) = 0 and Φ(i) → 1 as i → ∞.
In this model, task measure is fixed, and we start out with humans doing every task.
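For concreteness, one distribution satisfying those conditions (my own illustrative choice of functional form, not necessarily the one Korinek and Suh use) is an exponential:

$$\Phi(i) = 1 - e^{-\lambda i}, \qquad \lambda > 0,$$

so that $\Phi(0) = 0$, $\Phi(i) \to 1$ as $i \to \infty$, and the total measure of tasks is normalized to 1 no matter how capable AI becomes.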
But we could productively deploy more labor than we currently have. In reality, task measure is not fixed, and we are not capped at the ~4 billion human jobs (and ~100 billion non-human jobs) being done today.
We could have (in effect) 1 trillion workers, ~0.4% of whom are humans in management/oversight/monitoring roles, with no hit to human employment.
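To make the arithmetic in that scenario explicit (these are hypothetical numbers, not a forecast):

```python
# Hypothetical scenario: ~1 trillion worker-equivalents deployed,
# with human headcount unchanged at ~4 billion.
human_workers = 4e9
future_total = 1e12

print(f"Human share of future work: {human_workers / future_total:.1%}")  # 0.4%
# Human employment (the ~4 billion jobs) is unchanged; only the share of work falls.
```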
The total amount of work being done will increase, and the % done by humans will fall. We cannot say from that alone whether, or how much, human employment will decline.
- ^
Bostrom explains that we have so far prioritized consumption over leisure.
- ^
Separately, I think a ‘good outcome’ might look like ‘UBI with strings attached’: 1-5 hours of economically productive work/year. When saying so, I invoke this quote from Nick Bostrom: “We are not trying to predict what will happen. Rather, we are investigating what we can hope will happen if things go well.”
- ^
Humans would provide maintenance the AIs can’t self-provide, supply direction, check decisions the AI systems are uncertain about, monitor activations, and bear accountability for decisions made by AI agents acting on their own prerogative.
- ^
The exact number may vary, depending on which year you set as the baseline and how you run the BOTEC; this is compatible with the broader point.
Mind: Brain Replacement isn’t Brain Augmentation.
History, and much of what you call the 96% of non-human work (however you define that exact number), was mostly brain augmentation of one sort or another, i.e. extending the brain’s reach beyond our arms and mouths, using horses, ploughs, speakers, and all manner of machines, worksheets, what have you.
Advanced AI, in contrast, is more and more sidelining that hitherto essential piece, that monopolist: the human brain.
And so, whatever the past, there is a structural break happening right now.
And so you, and the many others who ignore the one simple phrase I suggest remembering (Brain Replacement isn’t Brain Augmentation), risk waking up baffled in the not-so-distant future. That, at least, would seem the natural course of things to expect absent doom. Then again, the future is weird and who knows anyway. Maybe it’s so weird that, one way or another, you’ll still be right; I just really wouldn’t bet on it the way you seem to.
What percentage of humans do you think will be capable of providing supervision to AGIs?
Think about the theory of the firm: the firm is the largest portion of the economy that is better run as an authoritarian central-planning regime than by using market prices to orient and organize production.
What we have seen with informatics over the past few decades is exactly that: bigger keeps getting better. The small-cap factor hasn’t worked for years now. J.P. Morgan Chase & Co. is the world’s largest bank and it’s outperforming the industry. Amazon is able to coordinate more than 1.5 million full-time employees worldwide.
As AGI accelerates this trend, there’s no reason to imagine we won’t see further consolidation. Yeah, sure, some people like Pepsi and other people like Coca-Cola. But there likely won’t be 2,000 different soda brands, each of which needs individual human oversight.
If you can organize more of production through central planning via informatics and AGI, I dunno that there will be much work left for humans to do.
And obviously, people on LW are überbulls on ASI. The view is that it’ll get millions of times smarter than humans, however you define that.