What’s A “Market”?

Economists have a very mathematically clean class of models of “markets”, and spill lots of ink arguing about how well this class of models applies to the markets of the real-world economy. I personally give relatively few shits about how well the mathematical notion of a market applies to real-world economic markets; I’m relatively more interested in applying the same models to systems in biology or ML/​AI. They’re very generalizable models.

Unfortunately, the mathematical notion of a “market” tends to be presented in math-heavy econ courses, and the parts I’d consider most central typically see surprisingly little coverage in more conceptual intro courses. So, this post aims to explain what I consider the central concepts of the mathematical notion of a market, without all the associated notation and jargon and proofs, in a way which lends itself to generalization beyond economics.

The Story About Apples And Bananas

We’ve got two people, Alice and Bob. Each of them can produce two goods, apples and bananas. Alice can use her land to produce five tons of apples, or one ton of bananas, or some proportional combination of the two. Bob can use his land to produce five tons of apples or twenty tons of bananas, or some proportional combination of the two. Both want a varied diet of apples and bananas.

… and you remember from econ 101 roughly how this goes, right? If the two just produce food for themselves separately, then each grows a mix of apples and bananas. But then Alice’s opportunity cost for one ton of apples is 1/5 = 0.2 tons of bananas, whereas Bob’s opportunity cost for one ton of apples is 20/5 = 4 tons of bananas. So, the two could produce a pareto gain of apples and bananas by specializing: Alice can specialize more toward apple production, Bob can specialize more towards banana production. For instance, if Alice shifts production toward 1 more ton of apples, while Bob shifts production toward 1 less ton of apples, then together they produce -0.2*1 + 4*1 = 3.8 tons more bananas with the same amount of apples.
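
(If you want to check the arithmetic, here is the same calculation as a short Python sketch; the production figures are just the ones from the story, and nothing else is assumed.)

```python
# Production possibilities from the story: tons of apples or tons of bananas
# if each person devotes all of their land to a single crop.
alice_max_apples, alice_max_bananas = 5, 1
bob_max_apples, bob_max_bananas = 5, 20

# Opportunity cost: tons of bananas given up per extra ton of apples produced.
alice_opp_cost = alice_max_bananas / alice_max_apples  # 1/5  = 0.2
bob_opp_cost = bob_max_bananas / bob_max_apples        # 20/5 = 4.0

# Alice shifts toward 1 more ton of apples, Bob toward 1 less ton of apples:
# total apples are unchanged, and total bananas change by...
banana_gain = -alice_opp_cost * 1 + bob_opp_cost * 1
print(banana_gain)  # 3.8 tons more bananas
```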

Now the key question for this post: when does this sort of specialization reach equilibrium? Under what conditions do Alice and Bob together decide that they’ve both specialized the correct amount, and don’t need to shift their production around any more?

In this exact example, they’ll only hit equilibrium once one of them is fully specialized—either Alice fully specialized in apples, or Bob fully specialized in bananas. Otherwise, they could always do better by specializing more. But in general, decreasing marginal returns might mean that both should be less-than-fully specialized—e.g. maybe both have some land better suited to apples and some better suited to bananas, so as they shift production their opportunity costs change.

So when will the two “reach equilibrium”? Well, when their opportunity costs are the same—i.e. when they have the same tradeoff between producing apples vs bananas.[1]

… and that’s a market.
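
To see that equilibrium condition concretely, here’s a toy numerical sketch with decreasing marginal returns (the square-root production frontiers and all the numbers are my own invented example, not part of the story above): fix the total apple output, search for the banana-maximizing split between the two producers, and check that their opportunity costs match at the optimum.

```python
import numpy as np

# Toy concave production frontier (invented for illustration): if a producer
# grows `a` tons of apples, they can still grow k * sqrt(cap - a) tons of
# bananas, so their opportunity cost of apples rises as they specialize more.
def bananas(a, k, cap):
    return k * np.sqrt(cap - a)

def opp_cost(a, k, cap):
    # Tons of bananas forgone per extra ton of apples: -d(bananas)/d(apples).
    return k / (2 * np.sqrt(cap - a))

k_alice, cap_alice = 1.0, 5.0  # Alice: relatively good at apples
k_bob, cap_bob = 2.0, 5.0      # Bob: relatively good at bananas

total_apples = 4.0  # hold total apple output fixed, maximize total bananas
grid = np.linspace(0.01, total_apples - 0.01, 100_000)  # Alice's share of the apples
total_bananas = bananas(grid, k_alice, cap_alice) + bananas(total_apples - grid, k_bob, cap_bob)
a_star = grid[np.argmax(total_bananas)]

print(a_star)                                           # ~3.8: Alice mostly (but not fully) specialized
print(opp_cost(a_star, k_alice, cap_alice))             # ~0.456 tons of bananas per ton of apples
print(opp_cost(total_apples - a_star, k_bob, cap_bob))  # ~0.456: same tradeoff rate -> equilibrium
```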

More generally, we have:

  • A bunch of agents, and a bunch of goods.

  • Each agent has their own opportunity cost for each good, or marginal trade-off rate between goods.

  • At equilibrium, the trade-off rates are the same for all agents (otherwise they can achieve a pareto improvement by specializing more).

  • The “market” is the set of agents at equilibrium, and the “market prices” are the (shared) trade-off rates between goods.
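
That definition is small enough to write down directly. Here’s a minimal sketch (the agents and numbers are made up for illustration): represent each agent by their marginal tradeoff rates relative to some reference good, and call the collection a market at equilibrium exactly when those rates all agree.

```python
# Each agent's marginal tradeoff rates, expressed relative to a reference good
# ("apples"): how many apples one unit of each good is worth to them at the margin.
agents = {
    "alice": {"apples": 1.0, "bananas": 2.5},
    "bob": {"apples": 1.0, "bananas": 2.5},
}

def at_equilibrium(agents, tol=1e-9):
    """True iff every agent has the same tradeoff rate for every good."""
    reference, *rest = agents.values()
    return all(
        abs(rates[good] - reference[good]) <= tol
        for rates in rest
        for good in reference
    )

print(at_equilibrium(agents))  # True; the shared rates are the "market prices"
```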

Another Story: Stock Traders

We have a bunch of stock traders, each with a portfolio of stocks and cash, and a utility function over their portfolio which they seek to maximize. (We’ll assume, for simplicity, that the traders are not updating their beliefs over the course of this story, so we can ignore the “expectation” part of their expected utility maximization.)

Unlike the previous story, there’s no production, but each trader still has an implicit tradeoff rate between stocks—e.g. for each trader, there is some number of GOOG shares which they’d be willing to trade for one TSLA. And if two traders have different tradeoff rates—e.g. they’re willing to trade off TSLA for GOOG in different ratios—then the two can make a mutually beneficial trade: the trader who’s willing to give up more GOOG per TSLA hands over some GOOG in exchange for TSLA, at a ratio somewhere between their two rates. Each stock ends up with the trader who values it relatively more (the analogue of “specializing”), and both traders end up with higher utility.[2]

So we have a similar equilibrium story to before: traders equilibrate when they’re all willing to trade off between different stocks and cash at the same ratios. And that defines a market: once the traders equilibrate, the ratios at which they’re collectively willing to trade are the “market prices”, and the traders themselves form the market.

Traders trade pairwise for mutual gains, until their tradeoff ratios (i.e. implied prices) all equilibrate.
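
Here’s a toy simulation of that pairwise process (the log-utility traders, the initial holdings, and the trade-at-the-midpoint rule are all modeling assumptions of mine, not anything from the story): each trader’s implied dollar price for the stock is their marginal utility of one more share; the traders with the highest and lowest implied prices repeatedly trade small amounts at the midpoint price, and the implied prices converge to a single shared price.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical traders with quasilinear utility over one stock and cash:
#   utility_i = a_i * ln(shares_i) + cash_i
# so trader i's implied dollar price for one more share is a_i / shares_i.
n = 5
a = rng.uniform(50.0, 150.0, n)     # taste parameters
shares = rng.uniform(1.0, 10.0, n)  # initial holdings
cash = np.full(n, 1000.0)

def implied_prices():
    return a / shares

for _ in range(100_000):
    p = implied_prices()
    buyer, seller = np.argmax(p), np.argmin(p)  # most- and least-eager holders
    gap = p[buyer] - p[seller]
    if gap < 1e-3:
        break
    price = (p[buyer] + p[seller]) / 2          # trade at the midpoint price...
    size = 0.001 * gap                          # ...in small amounts, so both sides gain
    shares[buyer] += size
    shares[seller] -= size
    cash[buyer] -= price * size
    cash[seller] += price * size

print(np.round(implied_prices(), 2))  # all (approximately) equal: the market price
```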

Two side-notes on this example, both of which generalize.

First, to keep things simple here, I’m assuming/​approximating that there’s zero bid-ask spread, i.e. if a trader’s marginal tradeoff rate between GOOG and TSLA is 2:3 then they’re willing to trade either 3 GOOG for 2 TSLA or 2 TSLA for 3 GOOG. With nonzero bid-ask spread we need to represent the “prices” as intervals or polytopes or convex sets rather than real numbers, which makes things more mathematically complicated, but the intuitive picture remains similar.

Second, once there are more than two goods, we still only need to keep track of one price for each good, rather than a tradeoff rate between each pair. Why? Well, suppose one trader prices GOOG at $150 and TSLA at $225 (a 2:3 price ratio). Since there’s zero spread, they’re willing to go either direction on either of those two trades ($ <-> GOOG and $ <-> TSLA), so implicitly, they’re willing to trade 3 GOOG for 2 TSLA (via 3 GOOG <-> $450 <-> 2 TSLA) in either direction. So, by specifying the marginal tradeoff rates with respect to one reference good (e.g. $, in this example), we specify all the pairwise marginal tradeoff rates implicitly.
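
In code, that bookkeeping is just division by the reference-good price; here’s a tiny sketch using the numbers above (the `rate` helper is my own illustrative wrapper):

```python
# One marginal valuation per good, relative to a reference good (dollars here).
prices = {"USD": 1.0, "GOOG": 150.0, "TSLA": 225.0}

def rate(give, get):
    """How many units of `give` this trader swaps for one unit of `get`."""
    return prices[get] / prices[give]

print(rate("USD", "GOOG"))   # 150.0 -> $150 per GOOG share
print(rate("GOOG", "TSLA"))  # 1.5   -> 1.5 GOOG per TSLA, i.e. 3 GOOG for 2 TSLA (either direction)
```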

Temperature

Time for an example which illustrates the breadth of applicability.

Isolated physical systems have two central thermodynamic quantities: internal energy, and (neg-)entropy. The rate at which those two “trade off” is set by temperature: the inverse temperature quantifies how much entropy changes if a unit of internal energy is added to the system. (Indeed, that’s how temperature is often defined.)

If two systems are at different temperatures, then they have different tradeoff rates between energy and entropy. So, a unit of energy can be removed from one and added to the other, with a net increase in entropy. Systems equilibrate when the tradeoff rates—i.e. temperatures—are the same.

So, we can view a bunch of physical systems at thermal equilibrium as a market. The “goods” are energy and entropy, and these can be exchanged between subsystems in order to increase overall entropy. At equilibrium, the shared “market price” is the temperature of all the systems.
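
Here’s a toy numerical version of that equilibration (the entropy function S(U) = C*ln(U), which gives temperature T = U/C, is a standard idealization chosen here for simplicity, as are the specific numbers): repeatedly move a small unit of energy from the hotter subsystem to the colder one; total entropy rises at every step, and the flow stops when the temperatures match.

```python
import numpy as np

# Two subsystems with toy entropy functions S_i(U) = C_i * ln(U),
# so dS/dU = C/U = 1/T, i.e. temperature T = U / C.
C = np.array([2.0, 5.0])      # "heat capacities"
U = np.array([100.0, 100.0])  # internal energies; the total is conserved

def temperature(U):
    return U / C

def total_entropy(U):
    return float(np.sum(C * np.log(U)))

print(total_entropy(U))  # initial entropy

dU = 0.01
for _ in range(100_000):
    T = temperature(U)
    hot, cold = np.argmax(T), np.argmin(T)
    if T[hot] - T[cold] < 0.01:  # stop once temperatures agree (tolerance exceeds the per-step change)
        break
    # Moving dU from hot to cold changes entropy by dU*(1/T_cold - 1/T_hot) > 0.
    U[hot] -= dU
    U[cold] += dU

print(temperature(U))    # both ~28.6: the shared "market price" (temperature)
print(total_entropy(U))  # strictly higher than the initial entropy
```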

The Key Principle: Pareto Optimality with Multiple Subsystems

Two key ideas unify these examples. First, there are multiple “goals”—maybe multiple goods, maybe multiple agents’ utilities, maybe energy and entropy—and the system “equilibrates” at a pareto optimum of those goals. Second, there are multiple subsystems, each of which can contribute to the different goals.

Pareto optimality implies an (implicit) set of “prices”—i.e. marginal tradeoff rates between the goals. As with the stock traders example, once we know the marginal tradeoff rates between each goal and one (arbitrary) reference goal, that specifies the full set of “prices”—the pairwise tradeoff rates between goals.

So, assuming each subsystem can’t achieve a pareto improvement in the goals by itself, each subsystem has an implicit set of prices for the goals. But before equilibration, those prices could be different for the different subsystems, in which case a marginal pareto improvement can be achieved by jointly tweaking any two subsystems with different prices. Equilibrium is defined by the absence of any more pareto improvements, which implies that the subsystems all have the same implied prices.

And that’s the core concept of a market: a bunch of subsystems at pareto optimality with respect to a bunch of goals. The core mathematical reason why this kind of “market” is interesting and useful to think about is that, whenever a bunch of subsystems are at pareto optimality with respect to a bunch of goals, they must all have the same implied “prices” (i.e. marginal tradeoff rates) for those goals. Whenever two subsystems have different implied prices, a marginal pareto gain can be achieved by jointly optimizing the two, so the two are “out of equilibrium”.

  1. ^

    Full specialization is included as a corner-case of “opportunity costs are the same”. Conceptually: when there’s a “corner” in the production curve, we can view the slope of the curve (i.e. the opportunity cost) as being anything between the cost on one side and the cost on the other side. In that case, the tradeoff rate has to be represented as an interval or polytope or convex set rather than a real number. Search jargon: subgradient calculus.

  2. ^

    By this point someone often chimes in and says “but maybe that trade made third parties worse off!” and then someone else says “only relative to some weird choice of baseline which wasn’t a very natural baseline in the first place” and then they start arguing about free market policies and, like, I don’t give a shit. This post is not primarily about agents’ welfare, it is about equilibrium conditions.

    Also I have a whole post titled Comparative Advantage is Not About Trade, if you want to see the sort of principles in this post applied in a more explicitly economic setting while still trying to emphasize that it’s not really about interminable free market political debates.