Death by Red Tape

Contains spoilers for the worldbuilding of Vernor Vinge’s “Zones of Thought” universe.

Based on Eliezer’s vision of the present from 1900.


In the Zones of Thought universe, there is a cycle of civilization: civilizations rise from stone-age beginnings, gradually accumulating technology, until they reach the peak of technological possibility. At that point, the only way they can improve society further is by over-optimizing it for typical cases, removing slack. Once society has removed its slack, it’s just a matter of time until unforeseen events force the system slightly outside its safe parameters. This sets off a chain reaction: like dominoes falling, the failure of one subsystem causes the failure of another and another. The catastrophe either kills everyone on the planet, or sets things so far back that society has to start from scratch.

Vernor Vinge was writing before Nassim Taleb; if not for that, this could easily be read as a reference to Taleb’s ideas. Taleb mocks the big players on the stock market for betting on the typical case and taking huge losses when “black swans” (unexpected/unanticipatable events) occur. (Taleb himself makes money on the stock market by taking the opposite side of these bets, betting on the unknown unknowns.)

Taleb ridicules Bayesians for their tendency to rely on oversimplified models which assume the future will look like the past. Instead he favors Popperian epistemology and ergodicity economics.

Indeed, naive Bayesians do run the risk of over-optimizing, eliminating slack, and assuming too readily that the future will be like the past. At the level of a whole society, it makes sense that this kind of thinking could eventually lead to catastrophe (and plausibly already has, in the form of the 2008 housing crash).

However, human nature and modern experience lead me to think that the opposite failure mode might be more common.

Taleb advises us to design “antifragile” systems which, like him, bet on the atypical and get stronger through failure. This means designing systems with lots of slack, modularity, redundancy, and multiple layers (think of a laptop, which has a hard chassis to protect and support the vital electronics, often has a moderately hard plastic protective case on top of that, and is then transported in a soft outer case of some kind). It means responding to black swans by building new systems which mitigate, or (even better) take advantage of, the new phenomena.

But when I look around at society (at least, through my Bayesian-biased lens), I see it doing too much of that.

  • The over-cautious FDA seemingly kills a lot more people on average (compared to a less-cautious alternative) in the name of avoiding risks of severe unanticipated drug side-effects. And people are largely comforted by this. A typical healthy individual would prefer (at least in the short term) to be very sure that the few drugs they need are safe, as opposed to having a wider selection of drugs.

  • In response to the 9/11 attacks, the government spent huge amounts of money on the TSA and other forms of security. It’s possible that this has been a huge waste of money. (The TSA spends $5.3 billion annually on airline security. It’s difficult to put a price on 9/11, but quick googling says that total insurance payouts were $40 billion. So, very roughly, the utilitarian question is whether the TSA stops a 9/11-scale attack every 8 years or so; a back-of-the-envelope version of this calculation appears after the list.) On the other hand, many people are probably glad for the TSA even if the utilitarian calculation doesn’t work out.

  • Requiring a license or more education may be an attempt to avoid the more extreme negative outcomes; for example, I don’t know the political motivations which led to requiring licenses for taxi drivers or hairdressers, but I imagine vivid horror stories were required to get people sufficiently motivated.

  • Regulation has a tendency to respond to extreme events like these, attempting to make those outcomes impossible while ignoring how much value is sacrificed in typical outcomes. Since people don’t really think in numbers, the actual frequency of extreme events probably gets little weight in these decisions.
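
As a rough sanity check on the arithmetic in the TSA bullet above, here is a minimal back-of-the-envelope sketch. It uses only the figures quoted in that bullet ($5.3 billion per year on airline security, roughly $40 billion in insurance payouts), and it deliberately ignores lives lost, deterrence effects, discounting, and all non-insured costs; treat it as an illustration of the break-even framing, not a real cost-benefit analysis.

```python
# Back-of-the-envelope break-even sketch for the TSA bullet above.
# Figures are the ones quoted in the post; everything omitted (lives lost,
# deterrence, discounting, non-insured costs) is a deliberate simplification.

TSA_ANNUAL_SPEND = 5.3e9     # dollars per year on airline security
ATTACK_INSURED_LOSS = 40e9   # rough insured cost of a 9/11-scale attack

# How many years of TSA spending equal the insured cost of one attack?
break_even_years = ATTACK_INSURED_LOSS / TSA_ANNUAL_SPEND

print(f"Break-even: one prevented 9/11-scale attack every {break_even_years:.1f} years")
# Prints roughly 7.5 years, i.e. the "every 8 years or so" figure above.
```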

Keep in mind that it’s perfectly consistent for there to be lots of examples of both kinds of failure (lots of naive utilitarianism which ignores unknown unknowns, and lots of overly risk-averse non-utilitarianism). A society might even die from a combination of both problems at once. I’m not really claiming that I’d adjust society’s bias in a specific direction; I’d rather have an improved ability to avoid both failure modes.

But just as Vernor Vinge painted a picture of slow death by over-optimization and lack of slack, we can imagine a society choking itself to death with too many risk-averse regulations. It’s harder to reduce the number of laws and regulations than it is to increase them. Extreme events, and the fear of extreme events, create political will for more precautions. The result is a system which punishes action.

One way I sometimes think of civilization is as a system of guardrails. No one puts up a guardrail if no one has gotten hurt. But if someone has gotten hurt, society is quick to set up rails (in the form of social norms, or laws, or physical guardrails, or other such things). So you can imagine the physical and social/legal/economic landscape slowly being tiled with guardrails which keep everything within safe regions.

This, of course, has many positive effects. But the landscape can also become overly choked with railing (and society’s budget can be strained by the cost of rail maintenance).