After Scarcity: Economic Frameworks for Zero Marginal Cost Goods

I’m a data architect by trade, not an economist, though economics and finance are a long-standing personal interest. I’m an AI accelerationist, but I believe we have a duty to proactively seek solutions to the problems AI may create.


Standard economic frameworks assume scarcity. This isn’t a conclusion the frameworks reach; it’s a load-bearing premise baked into the math before the first model is built. When that premise fails, the frameworks don’t just underperform. They actively recommend recreating the problem they were designed to solve.

We’re already living this:

The DMCA is economic policy whose primary function is reimposing scarcity on goods that physics made non-scarce.

Patent monopolies on pharmaceuticals are tollbooths on abundance.

Paywalls, proprietary APIs, license keys: institutional infrastructure for a world where the marginal cost of reproduction is zero but the revenue model requires pretending otherwise.

AI doesn’t create this problem. But it DOES scale it past the point where it can be rationalized away.


The Neoclassical Case, and Where It Breaks:

The strongest version of the pro-market argument is Hayekian: prices aggregate distributed information that no central planner can access. This is correct and important and I don’t want to wave it away.

The problem is that the information-aggregation function of prices requires two conditions: rivalry (my consumption reduces yours) and excludability (you can prevent non-payers from accessing the good). Remove either and the price signal stops reflecting real resource constraints. It reflects artificial ones, imposed to make the signal exist at all.

Software is non-rivalrous by physics. One more person running inference on a language model doesn’t degrade anyone else’s access. When you price it above zero, you’re not discovering information about real scarcity. You’re constructing scarcity to preserve a revenue model. The Hayekian defense doesn’t apply because the conditions that make prices informative aren’t present.

This already covers a substantial portion of the economy: software, media, pharmaceutical formulas, AI inference. Production costs are dominated by fixed costs (R&D, training runs) and marginal costs approach zero. Temporary monopoly grants (patents, copyright) exist to allow fixed cost recovery, and that logic has some validity. The problem is that monopoly rents have grown to exceed fixed costs by orders of magnitude, and the deadweight loss from restricted access is large and growing. The original justification has long since stopped doing the work required of it.
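To make the cost-recovery-versus-rent distinction concrete, here is a toy calculation of the deadweight loss from pricing a zero-marginal-cost good above zero. The linear demand curve and all numbers are hypothetical, chosen only for arithmetic clarity, not drawn from any real market:

```python
# Toy model (hypothetical numbers): linear demand P = a - b*Q for a
# non-rival good with zero marginal cost. Compares surplus at the
# efficient price (p = MC = 0) with the monopoly price (p = a/2).

def surplus(a: float, b: float, price: float) -> dict:
    """Surplus components under linear demand P = a - b*Q at a given price."""
    q = (a - price) / b                   # quantity demanded at this price
    consumer = 0.5 * (a - price) * q      # triangle under demand, above price
    revenue = price * q                   # producer revenue (MC = 0, so all margin)
    potential = 0.5 * a * (a / b)         # total surplus when p = MC = 0
    return {
        "quantity": q,
        "consumer_surplus": consumer,
        "revenue": revenue,
        "deadweight_loss": potential - consumer - revenue,
    }

a, b = 100.0, 1.0
efficient = surplus(a, b, price=0.0)      # price at marginal cost
monopoly = surplus(a, b, price=a / 2)     # profit-maximizing monopoly price

print(efficient["deadweight_loss"])   # 0.0
print(monopoly["deadweight_loss"])    # 1250.0 — a quarter of the 5000.0 potential
```

Under these assumptions the monopoly price destroys a quarter of the potential surplus outright, and that loss persists whether or not the revenue exceeds the fixed costs it was meant to recover.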


Labor-Based Distribution:

The intuition that income should be proportional to economic contribution sounds like a moral principle (and it’s certainly enshrined in plenty of religious dogma). In practice it’s a description of what happens when workers have leverage. Wages are negotiation outcomes reflecting the balance of power between workers with a skill and employers who want it, a.k.a. supply and demand.

This matters because leverage and contribution make different predictions under automation. If wages track leverage, automating a skill destroys wages for that skill directly. If wages track genuine contribution, automating a skill should free workers to contribute elsewhere and aggregate wages should be stable or rise.

AI is a qualitatively different case because it’s attacking cognitive work broadly and simultaneously rather than automating specific tasks sequentially. The historical pattern of “disrupted workers retrain for the next tier of cognitive work” depends on there always being a next tier requiring human leverage. That assumption is now genuinely uncertain in a way it wasn’t before.

The implication: “earn your keep through labor market contribution” is a weak foundation for a distributive principle in an AI economy. It was always a proxy for leverage. You need a different principle, and you probably want to figure out what it is before the leverage is fully gone.


The UBI Argument, Stated Structurally:

The usual case for UBI is grounded in claims about what people deserve. These arguments aren’t wrong, but they’re easy to dismiss.

The structural argument is harder. An AI economy with concentrated ownership and no mass distribution mechanism is internally incoherent. AI automates production, reducing labor income. Reduced labor income reduces consumption. Reduced consumption reduces demand for AI-produced goods. Capital that can’t find productive investment bids up existing assets instead. This isn’t a prediction. It’s a description of the last fifteen years.

UBI or universal basic services closes this loop. Without it you don’t get an abundance economy. You get extraordinary productive capacity running at partial utilization because the people who would benefit from it can’t pay for it.
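The loop can be sketched as a toy simulation. Every parameter here is invented for illustration, and the model is a deliberate caricature (it ignores investment demand entirely, which is the essay’s point about capital bidding up existing assets instead): workers spend all their income, capital owners spend only a fraction of theirs, and next period’s output is capped by this period’s demand. The only thing in the model that can hold output at capacity is a transfer recycling capital income back into consumption:

```python
# Toy simulation (illustrative parameters only, not a calibrated model):
# automation shifts income from wages to capital income. Workers and
# transfer recipients spend everything; owners consume only a fraction
# and the rest leaks out (no investment channel in this caricature).

CAPACITY = 100.0
OWNER_SPEND_RATE = 0.2   # fraction of retained capital income owners consume

def run(labor_share: float, transfer_rate: float, periods: int = 50) -> float:
    """Output after `periods` rounds, given a labor share and a transfer
    (fraction of capital income redistributed as a basic income)."""
    output = CAPACITY
    for _ in range(periods):
        wages = labor_share * output
        capital_income = (1 - labor_share) * output
        transfer = transfer_rate * capital_income
        # Aggregate demand: wages and transfers are fully spent; owners
        # spend only part of what they keep. Demand caps next output.
        demand = wages + transfer + OWNER_SPEND_RATE * (capital_income - transfer)
        output = min(CAPACITY, demand)
    return output

print(run(labor_share=0.6, transfer_rate=0.0))  # output collapses toward zero
print(run(labor_share=0.6, transfer_rate=1.0))  # holds at capacity: 100.0
```

Lowering the labor share (more automation) only makes the no-transfer collapse faster; full redistribution of capital income keeps demand equal to output regardless of how automated production becomes. That is the structural sense in which the transfer “closes the loop.”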


The Coordination Problem:

Markets solve a specific coordination problem: how do you allocate rival, excludable goods across millions of actors with private information, without a central authority? For non-rival goods, the problem structure is different. The question is what gets produced, at what quality, directed at whose needs.

The three leading alternatives all have serious failure modes. Democratic allocation captures majority preferences while overriding minority needs, operates on short time horizons, and has no mechanism for registering preference intensity. Technocratic planning was the 20th century’s experiment, and the information problem Hayek identified turned out to be real. Algorithmic governance optimizes for revealed preferences that are already shaped by existing prices and power distributions: you’re encoding the current distribution into the objective function and calling it neutral.

What Ostrom’s work actually suggests is that the search for a single universal coordination mechanism to replace markets is the wrong frame. Successful commons governance is institutional, local, and specific. Polycentric governance, meaning multiple overlapping institutions managing different aspects of shared resources, consistently outperforms both privatization and central control on her metrics.

The AI commons governance problem is that scale seems to break this. Whether Ostrom’s principles generalize to a global AI infrastructure serving billions of people is an open empirical question. I don’t have a confident answer.

What I’m more confident about: commodification of naturally non-scarce goods isn’t a neutral default. It’s a choice that produces specific distributional outcomes, namely rents flowing to whoever controls the access layer, and it forecloses other institutional possibilities before they can develop. The argument for enclosure is usually fixed cost recovery. That’s legitimate up to the point where rents exceed fixed costs. Past that point it’s rent extraction wearing the clothes of cost recovery.

The path dependence concern is real. The enclosure of the internet happened fast and the window for alternative institutional choices closed fast. The AI enclosure is moving faster. If better coordination mechanisms need to be developed, the time to do it is before incumbent institutions are fully entrenched.
