The Tragedy of the Anticommons

I assume that most of you are familiar with the concept of the Tragedy of the Commons. If you aren’t, well, that was a Wikipedia link right there.

However, fewer are familiar with the Tragedy of the Anticommons, a term coined by Michael Heller. Where the Tragedy of the Commons is created by too little ownership, the Tragedy of the Anticommons is created by too much.

For instance, the classic solution to the TotC is to divide up the commons between the herders using it, giving each of them ownership of a particular part. This gives each owner an incentive to keep their own plot sustainable. But what would happen if the commons were divided up into thousands of miniature pieces, say one square inch each? In order to herd your cattle, you’d have to acquire permission from hundreds of different owners. Not only would this be a massive undertaking in itself, but any one of them could say no, potentially ruining your entire attempt.
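To see how quickly fragmented ownership becomes fatal, here’s a minimal sketch. It assumes, purely for illustration, that each owner independently agrees with the same fixed probability; real negotiations are of course messier, but the arithmetic of compounding vetoes is the point:

```python
# Minimal sketch of the holdout problem: a project needing permission
# from n independent owners proceeds only if every single one says yes.
# The uniform 99% agreement rate is an illustrative assumption.

def chance_all_agree(p_agree: float, n_owners: int) -> float:
    """Probability that all n_owners independently say yes."""
    return p_agree ** n_owners

for n in (1, 10, 100, 500, 1000):
    print(f"{n:5d} owners: {chance_all_agree(0.99, n):6.1%} chance the project proceeds")
```

Even with every single owner 99% likely to cooperate, needing a hundred permissions cuts your odds to roughly a third, and needing a thousand makes the project essentially impossible.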

This isn’t just a theoretical issue. In his book, Heller offers numerous examples, such as this one:

…gridlock prevents a promising treatment for Alzheimer’s disease from being tested. The head of research at a “Big Pharma” drugmaker told me that his lab scientists developed the potential cure (call it Compound X) years ago, but biotech competitors blocked its development. … the company developing Compound X needed to pay every owner of a patent relevant to its testing. Ignoring even one would invite an expensive and crippling lawsuit. Each patent holder viewed its own discovery as the crucial one and demanded a corresponding fee, until the demands exceeded the drug’s expected profits. None of the patent owners would yield first. …
This story does not have a happy ending. No valiant patent bundler came along. Because the head of research could not figure out how to pay off all the patent owners and still have a good chance of earning a profit, he shifted his priorities to less ambitious options. Funding went to spin-offs of existing drugs for which his firm already controlled the underlying patents. His lab reluctantly shelved Compound X even though he was certain the science was solid, the market huge, and the potential for easing human suffering beyond measure.

Patents aren’t the only field affected by this tragedy. America’s airports are unnecessarily congested because landowners block all attempts to build new ones. 90% of the US broadcast spectrum goes unused, because in order to build national coverage, you’d need to apply for permission in 734 separate areas. Re-releasing an important documentary required a $600,000 donation and negotiations that stretched over 20 years, because the pictures and music used in it had so many different copyright owners.

So, what does all of this have to do with rationality, and why am I bringing it up here?

The interesting thing about the tragedy of the anticommons is that most people will be entirely blind to it. Patent owners block drug development, and the only people who’ll know are the owners in question and the people who tried to develop the drug. A documentary doesn’t get re-released? A few people might wonder why the documentary they saw 20 years ago isn’t available on DVD anywhere, but aside from that, nobody will know. And if you’re not even aware that a problem exists, you can’t fix it.

In general, something not happening is much harder to spot than something happening. Heller remarks that even the term “underuse” hasn’t existed for very long:

According to the OED, underuse is a recent coinage. In its first recorded appearance, in 1960, the word was hedged about with an anxious hyphen and scare quotes: “There might, in some places, be considerable ‘under-use’ of [parking] meters.” By 1970, copy editors felt sufficiently comfortable to cast aside the quotes: “A country can never recover by persistently under-using its resources, as Britain has done for too long.” The hyphen began to disappear around 1975.

This gives an interesting case study for rationality. If ‘underuse’ didn’t become a word until 1960, that suggests people were largely blind to the damage it causes until then. Heller speculates:

In the OED, this new word means “to use something below the optimum” and “insufficient use”. The reference to an “optimum” suggests to me how underuse entered English. It was, I think, an unintended consequence of the increasing role of cost-benefit analysis in public policy debates. …In the old world of overuse versus ordinary use, our choices were binary and clear-cut: injury or health, waste or efficiency, bad or good. In the new world, we are looking for something more subtle—an “optimum” along a continuum. Looking for an optimal level of use has a surprising twist: it requires a concept of underuse and surreptitiously changes the long-standing meaning of overuse. Like Goldilocks, we are looking for something not too hot, not too cold, not too much or too little—just right. …
How can we know whether we are overusing, underusing, or optimally using resources? It’s not easy, and not just a matter of economic analysis. Consider, for example, the public health push to increase the use of “statins”, drugs such as Lipitor that help lower cholesterol. Underuse of statins may mean too many heart attacks and strokes. But no one suggests that everyone should take statins: putting the drug in the water supply would be overuse. So what is the optimal level of use? … We estimate the cost of the drugs, we assign a dollar value to death and disease averted, and quantify the negative effects of increased use.
Driving faster gets you home sooner but increases your chance of crashing. Is the trade-off worthwhile? To answer that question, you need to know how to value life. If life were beyond value, we would require perfect auto safety, cars would be infinitely expensive, and car use would drop to nothing. But if there is too little safety regulation, too many will die. With auto safety, society faces another Goldilocks’ quest: we strive to ensure that, all things considered, cars kill the optimal amount of people. It sounds callous, but that’s what an optimum is all about.
… The possibility of underuse reorients policymaking from relatively simple either-or choices to the more contentious trade-offs that make up modern regulation of risk.
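To make the Goldilocks point concrete, here’s a toy version of Heller’s driving example: pick the speed that minimizes the total expected cost of a trip. Every number below is an assumption invented for illustration (including the cubic risk curve and the dollar value placed on a life); the point is only that the optimum lies strictly between “never drive” and “drive flat out”:

```python
# Toy model of the driving trade-off: time saved vs. expected fatalities.
# ALL figures are illustrative assumptions, not real actuarial data.

DISTANCE_KM = 30              # length of one trip
VALUE_OF_TIME = 20.0          # dollars per hour of travel time (assumed)
VALUE_OF_LIFE = 10_000_000    # dollars per statistical life (assumed)
RISK_COEFF = 7e-15            # fatal-crash risk per km per (km/h)^3 (assumed)

def expected_cost(speed_kmh: int) -> float:
    """Time cost plus expected fatality cost of one trip at a given speed."""
    time_cost = DISTANCE_KM / speed_kmh * VALUE_OF_TIME
    # Stylized assumption: fatal-crash risk per km grows with speed cubed.
    fatality_risk = RISK_COEFF * speed_kmh ** 3 * DISTANCE_KM
    return time_cost + fatality_risk * VALUE_OF_LIFE

optimum = min(range(10, 201), key=expected_cost)
print(f"Cost-minimizing speed under these made-up numbers: {optimum} km/h")
```

Lower the assumed value of a life and the “optimal” speed rises; raise it and the optimum falls; treat life as beyond value and the only acceptable speed is zero, which is exactly the binary, pre-underuse way of thinking Heller describes.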

There are several lessons one could draw here.

One is that we’re biased to be blind to underuse, whereas overuse is much easier to spot.

Another is the more general case: we should be careful to look for hidden effects in any policies we institute or actions we take. Even if there seems to be no damage, there may actually be. How can we detect such hidden effects? Cost-benefit calculations looking for the optimal level of use are one way, though they are time-consuming and will only help detect under- or overuse.

Not to mention that it’s hard, requiring us to put a value on lives and answer other hard moral questions. That brings us to the next lesson. Heller’s analysis implies that for a long time, people were blind to the very possibility of underuse because they were reluctant to tackle the hard problems. Refuse to assign a value to life? Then you can’t engage in cost-benefit analysis… and as a result, you’ll stay blind to the whole concept of underuse. If you’d looked at things objectively, you’d have seen that we need to put a value on lives in order to make decisions as a society. By refusing to do so, you’ll stay blind to a huge class of problems, simply because you didn’t want to ponder hard questions objectively.

That reinforces a point Eliezer has been making. I doubt anybody could have foreseen that by refusing to put a value on life, they’d fail to discover the concept of underuse, and thereby have difficulty noticing the risk of patents blocking the development of new drugs. If you let your beliefs get in the way of your rationality in even one thing, it may end up hurting you in entirely unexpected ways. Don’t do it.

(There are also some other lessons, which I realized after typing out this post… can you come up with them yourself?)