[Question] When does adding more people reliably make a system better?

Prediction markets have a remarkable property: they reward correct contrarianism. They incentivise people to disagree with the majority consensus and be right. If you add more traders to a market, in expectation the price will become more accurate.

More traders means both more fish and more sharks.
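To make the “more traders, more accurate prices” claim concrete, here is a minimal toy sketch (not a model of any real market microstructure): it assumes each trader holds an independent noisy estimate of the true probability and the market price is simply their average belief.

```python
import random

def simulate_market(n_traders: int, true_prob: float = 0.7, noise: float = 0.25,
                    trials: int = 2000) -> float:
    """Average absolute pricing error when each trader gets an independent noisy signal.

    Toy aggregation model: the "price" is just the mean of the traders' beliefs.
    """
    total_error = 0.0
    for _ in range(trials):
        # Each trader observes the true probability plus independent noise,
        # clamped to [0, 1]; the market price is the average belief.
        beliefs = [min(1.0, max(0.0, random.gauss(true_prob, noise)))
                   for _ in range(n_traders)]
        price = sum(beliefs) / n_traders
        total_error += abs(price - true_prob)
    return total_error / trials

for n in (1, 5, 25, 125):
    print(n, round(simulate_market(n), 3))
# The average pricing error falls as more traders are added.
```

Under these (strong) independence assumptions the error shrinks as the number of traders grows; real markets add further mechanisms, like letting better-informed traders bet more.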

(The movie “The Big Short” might be a very sad portrait of the global financial system. But it’s still the case that a system in a bad equilibrium with deeply immoral consequences rewarded the outcasts who pointed out those consequences with billions of dollars, even though socially no one bothered listening to them, including the US government, which ignored one fund manager’s offer to share his expertise about the events after the crash.)

Lots of things we care about don’t have this property.

  • Many social communities decline as more members join, and have to spend huge amounts of effort building institutions and rituals to prevent this.

  • Many companies see their culture decline as they hire more, and have to spend an incredible amount of resources simply to prevent this, let alone get better as more people join. (E.g. a big tech company can probably have >=5 candidates each spend >=10 hours in interviews for a single position, and that’s not counting the probably >=50 candidates for that position who each spend >=1 hour.)

  • Online forums usually decline as their user numbers grow (this happened to Reddit, Hacker News, and LessWrong 1.0).

In prediction markets the vetting process is really cheap: you might have to do some KYC, but mostly, adding new people is great. This seems like a really important property for a system to have, and something we could learn from when building other such systems.

What other systems have this property?