In a competitive market, companies pay wages equal to the Value of Marginal Product of Labor (VMPL) = P * MPL (price of marginal output * marginal product per hour). (In programming, outputs are things like new features or bug fixes, which don't have prices attached, so P here is really the perceived/estimated value (impact on company revenue or cost) of the output.)
When AI increases MPL, it can paradoxically decrease VMPL by decreasing P more, even if there are no new entrants in the programming labor market. This is because each company has a limited (especially in the short run) set of high value potential programming work to be done, which can be quickly exhausted by the enhanced programming productivity, leaving only low-value marginal work.
A more detailed explanation expanded by AI from the above, in case it’s too terse to understand.
Let’s unpack why programmer wages might stagnate or even fall when tools like AI dramatically increase individual productivity, even if the number of programmers available hasn’t changed. The core idea lies in how wages are determined in economic theory and how hyper-productivity can affect the value of the work being done at the margin.
1. How Are Wages Typically Determined? The VMPL = Wage Rule
In standard economic models of competitive labor markets, a company will hire workers (or more specifically, worker-hours) up to the point where the value generated by the last worker hired (the “marginal” worker) is just equal to the wage that worker must be paid. This value is called the Value of Marginal Product of Labor (VMPL).
VMPL = P * MPL
Let’s break down those components:
MPL (Marginal Product of Labor): This is the physical increase in output generated by adding one more unit of labor (e.g., one more hour of programming work). When you use AI assistance, you can produce more code, fix bugs faster, or complete features in less time. So, AI unambiguously increases your MPL. You are physically more productive per hour.
P (Price or Value of Output): This is the value the company gets from the specific output produced by that marginal hour of labor.
For factory workers making identical widgets, ‘P’ is simply the market price of one widget.
For programmers, it’s more complex. Features and bug fixes don’t usually have individual price tags. Instead, ‘P’ here represents the estimated value or business impact that the company assigns to the work done in that hour. This could be its impact on revenue, user retention, cost savings, strategic goals, etc. Crucially, different tasks have different perceived values (‘P’). Fixing a critical production bug has a much higher ‘P’ than tweaking a minor UI element.
So, the rule is: Wage = VMPL = (Value of Marginal Output) * (Marginal Output per Hour).
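The rule can be expressed as a minimal sketch (all numbers are invented for illustration, not taken from any real market):

```python
# Toy illustration of the wage rule: the value of the marginal hour is
# the per-unit value of its output times the units produced in that hour,
# and in a competitive market the wage equals that value.

def vmpl(price_per_unit, units_per_hour):
    """Value of Marginal Product of Labor for one hour of work."""
    return price_per_unit * units_per_hour

# A marginal hour producing 1 unit valued at $100 supports a $100 wage:
wage = vmpl(price_per_unit=100, units_per_hour=1)
```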
2. How AI Creates a Potential Paradox: Increasing MPL Can Decrease Marginal ‘P’
AI clearly boosts the MPL part of the equation. You get more done per hour. Naively, one might think this directly increases VMPL and thus wages. However, AI’s impact on the marginal ‘P’ is the key to the paradox.
Here’s the mechanism, focusing solely on the existing workforce (no new programmers):
Finite High-Value Work: Every company, at any given time, has a limited set of programming tasks it considers high-priority and high-value. Think of core features, major architectural improvements, critical bug fixes. There’s also a long list of lower-priority tasks: minor enhancements, refactoring less critical code, documentation updates, exploring speculative ideas.
Productivity Exhausts High-Value Tasks Faster: When programmers become 2x or 3x more productive thanks to AI, they can complete the company’s high-value task list much more quickly than before.
The Need to Fill Hours: Assuming the company still employs the same number of programmers for the same number of hours (perhaps due to contracts, wanting to retain talent, or needing ongoing maintenance), what happens once the high-value backlog is depleted faster? Managers need to assign some work to fill those paid hours.
Assigning Lower-Value Marginal Work: The work assigned during those hours will increasingly be drawn from the lower end of the priority list. The last few hours the company decides to pay for across its programming staff (the marginal hours) will be dedicated to tasks with significantly lower perceived business value (‘P’).
The Marginal ‘P’ Falls: The very productivity enhancement provided by AI leads directly to a situation where the value (‘P’) of the task performed during the marginal hour decreases significantly.
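The backlog mechanism above can be sketched as a toy model (the backlog values and hours are invented for illustration): tasks are consumed from most to least valuable, and doubling productivity pushes the marginal hour further down the value list.

```python
# Toy model of high-value-task saturation. A firm has a fixed backlog of
# tasks whose per-unit values decline, and programmers fill the same paid
# hours before and after the AI productivity boost.

def marginal_task_value(task_values, hours_paid, units_per_hour):
    """Per-unit value of the last task reached after filling all paid hours.
    task_values is sorted from most to least valuable."""
    last_unit = hours_paid * units_per_hour  # total units completed
    return task_values[min(last_unit, len(task_values)) - 1]

# Descending backlog: a few high-value tasks, then a long low-value tail.
backlog = [100, 100, 100, 100, 90, 70, 50, 40, 30, 20, 10, 5]

before = marginal_task_value(backlog, hours_paid=4, units_per_hour=1)
after = marginal_task_value(backlog, hours_paid=4, units_per_hour=2)
# before: the marginal hour does the 4th task, still worth $100/unit
# after: doubled MPL reaches the 8th task, worth only $40/unit
```

The same paid hours now terminate in a much cheaper task, which is exactly the fall in marginal 'P' described above.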
3. The Impact on VMPL and Wages
Now let’s look at the VMPL for that crucial marginal hour of programming work:
VMPL_marginal = P_marginal_task * MPL_boosted
Even though MPL_boosted is higher than before, if P_marginal_task has fallen proportionally more due to task saturation, the resulting VMPL_marginal can actually be lower than the VMPL_marginal before the AI productivity boost.
Example:
Before AI: Marginal hour MPL = 1 unit of work, Value P = $100/unit → VMPL = $100. Wage = $100.
After AI: Productivity doubles, MPL = 2 units/hour. But all $100-value tasks are done quickly. The marginal hour is now spent on a task valued at P = $40/unit.
New VMPL_marginal: $40/unit * 2 units/hour = $80.
Result: The maximum wage the company is willing to pay for that marginal hour (and thus, the market wage for all similar hours) could fall to $80, even though individual programmers are producing more output per hour.
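The example's arithmetic can be checked directly:

```python
# Verifying the worked example: doubling MPL while the marginal task's
# per-unit value falls from $100 to $40 lowers the marginal VMPL.

vmpl_before = 100 * 1  # P = $100/unit, MPL = 1 unit/hour
vmpl_after = 40 * 2    # P = $40/unit,  MPL = 2 units/hour

assert vmpl_before == 100
assert vmpl_after == 80
assert vmpl_after < vmpl_before  # the wage ceiling falls despite higher MPL
```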
Conclusion:
This explanation shows how a massive increase in individual productivity (MPL) can, somewhat paradoxically, lead to stagnant or falling wages even without any change in the number of available programmers. The mechanism is the saturation of high-value work. Because companies hire based on the value generated at the margin, and high productivity allows that margin to quickly shift to lower-value tasks, the perceived value (‘P’) of that marginal work can decrease significantly, potentially offsetting or even overwhelming the gains in physical productivity (MPL) when calculating the VMPL that determines wages.
I initially tried to use Gemini 2.5 Pro to write the whole explanation, but it kept making one mistake after another in its economics reasoning. Each rewrite would contain a new mistake after I pointed out the last one, or it would introduce a new mistake when I asked for some other kind of change. After pointing out 8 mistakes like this, I finally gave up and wrote it myself. I also tried Grok 3 and Claude 3.7 Sonnet but gave up more quickly on them after the initial responses didn’t look promising. However, AI still helped a bit by reminding me of the right concepts/vocabulary.
I thought it would be worth noting this, as it seems a bit surprising. (Supposedly “PhD-level” AI failing badly on an Econ 101 problem.) Here is the full transcript in case anyone is curious. Digging into this a bit myself, it appears that the “PhD-level” claim is based on performance on GPQA, which includes Physics, Chemistry, and Biology, but not Economics.
This is because each company has a limited (especially in the short run) set of high value potential programming work to be done, which can be quickly exhausted by the enhanced programming productivity, leaving only low-value marginal work.
This makes perfect sense when you put it this way, and yet I imagine that if I tried to make a similar argument on the internet, I would immediately get a “Lump of labour fallacy” reply.
(I guess the problem is with words such as “short run”. Are we talking weeks, months, or years? In a relatively static economy, or one approaching the singularity? Basically, it comes down to the speed of discovering new high-value work vs. the speed at which such work becomes obsolete.)