[Question] What are some real-life Inadequate Equilibria?

I would like to compile an as-comprehensive-as-possible list of known Inadequate Equilibria / large-scale coordination problems.

An Inadequate Equilibrium is a situation in which a community, an institution, or society at large is in a bad Nash equilibrium. The group as a whole has some sub-optimal set of norms and it would be better off with a different set of norms, but there’s no individual actor who has both the power and the incentive to change the norms for the group. So the bad equilibrium persists.
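
To make the structure concrete, here is a minimal sketch in Python of a stag-hunt-shaped game (the payoff numbers are made up purely for illustration): everyone is better off if all adopt the new norm, but anyone who switches alone loses out, so the bad equilibrium is self-sustaining.

```python
# A minimal sketch of a Pareto-inferior Nash equilibrium.
# Payoffs are hypothetical, chosen only to illustrate the structure.
# Each player picks a norm; payoffs[(mine, theirs)] is my payoff.
payoffs = {
    ("old", "old"): 1,  # everyone muddles along under the bad norm
    ("old", "new"): 1,  # sticking with the old norm costs you nothing...
    ("new", "old"): 0,  # ...but deviating alone is punished
    ("new", "new"): 2,  # coordinated switch: everyone is better off
}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither player gains by
    unilaterally switching to the other strategy."""
    other = {"old": "new", "new": "old"}
    return (payoffs[(a, b)] >= payoffs[(other[a], b)] and
            payoffs[(b, a)] >= payoffs[(other[b], a)])

for a in ("old", "new"):
    for b in ("old", "new"):
        print((a, b), "Nash" if is_nash(a, b) else "not Nash")
# Both ("old", "old") and ("new", "new") are Nash equilibria, but
# ("old", "old") pays everyone 1 instead of 2: an inadequate equilibrium.
```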

Eliezer offers the following more specific criteria:

  1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;

  2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and

  3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

I want to generate as many real-life examples of this phenomenon as possible. Help me generate some?

The goal here is quantity. Originality is not required.

As an additional incentive, I’ll award $100 to the user who generates the most examples. (In order to count, each example has to have enough detail, or link to a detailed enough explanation, that I can understand how it is an inadequate equilibrium, and not just an unfortunate state of the world.) And I’ll give $50 to the user who produces the second largest number.

I’ll prime the pump with a few (LessWrong-centric) examples.

Widespread adoption of prediction markets

It seems like our world would make saner decisions were prediction markets commonly used and commonly consulted. Making the switch from not relying on prediction markets to relying on them is fraught, because it might embarrass the leadership of existing institutions by revealing that their professed estimates are not very credible.

As Robin Hanson said in a recent interview,

I’d say if you look at the example of cost accounting, you can imagine a world where nobody does cost accounting. You say of your organization, “Let’s do cost accounting here.”

That’s a problem because you’d be heard as saying, “Somebody around here is stealing and we need to find out who.” So that might be discouraged.

In a world where everybody else does cost accounting, you say, “Let’s not do cost accounting here.” That will be heard as saying, “Could we steal and just not talk about it?” which will also seem negative.

Similarly, with prediction markets, you could imagine a world like ours where nobody does them, and then your proposing to do it will send a bad signal. You’re basically saying, “People are bullshitting around here. We need to find out who and get to the truth.”

But in a world where everybody was doing it, it would be similarly hard not to do it. If every project with a deadline had a betting market and you say, “Let’s not have a betting market on our project deadline,” you’d be basically saying, “We’re not going to make the deadline, folks. Can we just set that aside and not even talk about it?”

Moving from proprietary journals to open-access journals

There’s broad agreement that it is better for science to be open, and for anyone to be able to access scientific papers.

Unfortunately, scientific publishing is currently dominated by a cabal of journals that can gatekeep access to most scientific papers, charging people (or institutions) for the right to access them.

Individual scientists might prefer to publish in open-access journals instead, but doing so unilaterally means taking a career hit, because the most prestigious journals are not open-access, and publishing in prestigious journals is an important component of the academic signaling game. So, modulo coordinated action, scientists in most fields are incentivized to publish in closed journals instead of open ones.

Using Bayesian methods instead of frequentist statistics

The standard statistical method of hypothesis testing, or testing for statistical significance, is fraught with problems. Many of these problems could be avoided if scientists switched to reporting likelihoods instead of p-values. [See the Arbital page on this topic.]
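
As a toy contrast (the data are made up, and scipy is used only for its standard binomial functions): for the same experiment, a frequentist report measures surprise under a single null hypothesis, while a likelihood report compares how well each hypothesis predicts the data.

```python
# Toy contrast between a p-value and a likelihood ratio, using
# made-up data: 60 heads in 100 flips. Illustrative only.
from scipy.stats import binom, binomtest

heads, n = 60, 100

# Frequentist report: p-value for the null hypothesis "the coin is fair".
p_value = binomtest(heads, n, p=0.5).pvalue
print(f"p-value under H0 (fair coin): {p_value:.3f}")

# Likelihood report: how well does each hypothesis predict the data?
lik_fair = binom.pmf(heads, n, 0.5)    # P(data | coin is fair)
lik_biased = binom.pmf(heads, n, 0.6)  # P(data | heads-probability 0.6)
print(f"likelihood ratio (0.6 vs fair): {lik_biased / lik_fair:.1f}")
# The ratio says how strongly the data favor one hypothesis over the
# other, rather than measuring surprise under a single null.
```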

But again, for any individual scientist, using a non-standard statistical methodology that is unfamiliar to others in their field is damaging to their career prospects: their papers are less likely to be accepted by journals and less likely to be cited. So no individual scientist benefits from unilaterally switching, even if any given field would benefit if everyone switched.

Do you guys have more?

[Edit 6:45 PM]:
To be counted, every entry should include:

  1. How things are currently, and why that’s bad.

  2. How they could be instead, and why that’s better.

  3. What’s blocking the transition from 1 to 2.

It’s not sufficient to point to a place where things are a mess if there isn’t a clear, stable alternative. And it isn’t sufficient to point out a way that things could be better without an explanation of why the change hasn’t already taken place.