Why falling labor share ≠ falling employment


Edit (27/​01/​25):

Thank you everyone who gave thoughtful comments and feedback!

On reflection, and after reading in ‘Deep Utopia’ about Keynes’ 1930 prediction that we would each work 15-hour work-weeks today[1], I think it’s both likely and desirable that humans will work fewer hours per week in future. That would be ‘falling employment’.

But AI doing more work does not necessitate humans doing less. It’s not a logical implication, because the total amount of work done can expand. That this is a non-implication is important to me, and it is the point I make in this post.[2]

TL;DR: As we deploy AI, the total amount of work being done will increase, and the % done by humans will fall. We cannot say from that alone whether, or how much, human employment will decline.

Sometimes, I hear economists make this argument about transformative AI:

I’ll believe it when it starts showing up in the GDP/​employment statistics!

I think transformative AI will increase GDP. However, this does not necessitate a decline in human employment.

Anthropic CEO Dario Amodei imagines advanced AI as a “country of geniuses in a datacenter”. If such a country spontaneously sprang up tomorrow, I don’t think it would reduce human employment. Investors might want to re-allocate capital towards the country, but the country would require some inputs that it’s unable to self-supply.[3]

It is possible that human and AI inputs could be complementary to each other — by default or because they are legislated to be.

~4 billion humans and ~100 billion non-human worker-equivalents currently work (BOTEC). A ‘worker-equivalent’ here means ‘the amount of work one average human worker in 1700 could perform in a year’. From 1900 to 2020, human labor input grew by ~2.5× while total economic work grew by ~16×, meaning most of the additional work was done by machines. On this BOTEC, only ~4% of work is done by humans today.[4]
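The arithmetic behind this BOTEC can be checked directly. The input figures (4 billion humans, 100 billion machine worker-equivalents, and the 2.5× / 16× growth factors) are the post’s estimates; the code just does the division:

```python
# Back-of-the-envelope check of the worker-equivalent shares in the post.
human_workers = 4e9          # ~4 billion human workers today (post's estimate)
machine_equivalents = 100e9  # ~100 billion non-human worker-equivalents (post's estimate)

total = human_workers + machine_equivalents
human_share = human_workers / total
print(f"human share of work today: {human_share:.1%}")  # ~3.8%, i.e. roughly 4%

# 1900 -> 2020: human labor input grew ~2.5x while total work grew ~16x,
# so whatever the 1900 human share was, it fell by a factor of 16/2.5:
share_decline_factor = 16.0 / 2.5
print(f"human share of total work fell ~{share_decline_factor:.1f}x over 1900-2020")
```

Note that the decline factor 16/2.5 ≈ 6.4 holds regardless of what the 1900 baseline share was.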

Some economists model that the amount of work done in the future will be the same as the amount of work done today. In Korinek and Suh’s ‘Scenarios for the Transition to AGI’:

The distribution function Φ(i) reflects the cumulative mass of tasks with complexity ≤ i and satisfies Φ(0) = 0 and Φ(i) → 1 as i → ∞.

In this model, task measure is fixed, and we start out with humans doing every task.
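To make the fixed-measure assumption concrete, here is a toy version — my illustration, not Korinek and Suh’s exact specification — using an exponential complexity distribution Φ(i) = 1 − e^(−i), which satisfies Φ(0) = 0 and Φ(i) → 1 as i → ∞:

```python
import math

# Toy illustration of a FIXED task measure (an assumption of this sketch,
# not Korinek & Suh's exact model): Phi(i) = 1 - exp(-i) satisfies
# Phi(0) = 0 and Phi(i) -> 1 as i -> infinity.
def phi(i: float) -> float:
    return 1.0 - math.exp(-i)

# If AI automates every task of complexity <= I, humans keep the remaining
# mass 1 - Phi(I) of a fixed total task measure of 1. As the automation
# frontier I rises, the human share can only shrink toward zero.
for frontier in (0.0, 1.0, 3.0, 6.0):
    print(f"frontier I = {frontier}: human task share = {1 - phi(frontier):.3f}")
```

Under a fixed measure, human work is squeezed toward zero as the frontier rises; the point below is that the total measure need not stay fixed.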

But we could productively deploy more labor than we currently have. In reality, task measure is not fixed, and we are not capped at the ~4 billion human jobs (and ~100 billion non-human jobs) being done today.

We could have (in effect) 1 trillion workers, ~0.4% of whom are humans (4 billion people in management/oversight/monitoring roles), with no hit to human employment.
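The arithmetic of that scenario, using the post’s figures (the 1-trillion total is hypothetical):

```python
# Hypothetical scenario: the total effective workforce expands to 1 trillion
# worker-equivalents while all ~4 billion humans remain employed.
humans = 4e9
workforce_today = 104e9   # ~4B humans + ~100B machine worker-equivalents
workforce_future = 1e12   # hypothetical expanded total

share_today = humans / workforce_today
share_future = humans / workforce_future
print(f"human share of work: {share_today:.1%} -> {share_future:.2%}")
print(f"human employment: {humans:.0e} -> {humans:.0e} (unchanged)")
```

The human share of work falls by an order of magnitude, yet the number of employed humans is identical in both states.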

The total amount of work being done will increase, and the % done by humans will fall. We cannot say from that alone whether, or how much, human employment will decline.

  1. ^

    Bostrom explains that we have so far prioritized consumption over leisure

  2. ^

    Separately, I think a ‘good outcome’ might look like ‘UBI with strings attached’: 1-5 hours of economically productive work/​year. When saying so, I invoke this quote from Nick Bostrom: ⁠“We are not trying to predict what will happen. Rather, we are investigating what we can hope will happen if things go well.”

  3. ^

Humans would provide maintenance the AIs can’t self-provide, supply direction, check decisions the AI systems are uncertain about, monitor activations, and bear accountability for decisions AI agents make on their own initiative.

  4. ^

The exact number may vary depending on which year you set as the baseline and how you run the BOTEC; this is compatible with the broader point.