Maybe we’re not doomed

This is prompted by Scott’s excellent article, Meditations on Moloch.

I might caricature (grossly unfairly) his post like this:

  1. Map some central problems for humanity onto the tragedy of the commons.

  2. Game theory says we’re doomed.

Of course my life is pretty nice right now. But, goes the story, this is just a non-equilibrium starting period. We’re inexorably progressing towards a miserable Nash equilibrium, and once we get there we’ll be doomed forever. (This forever loses a bit of foreverness if one expects everything to get interrupted by self-improving AI, but let’s elide that.)
There are a few ways we might not be doomed. The first, and less likely, is that people will simply decide not to go to their doom, even though it's the Nash equilibrium. To give a totally crazy example, suppose there were two countries playing a game where the first one to launch missiles had a huge advantage. Neither country trusts the other, and there are multiple false alarms, pushing the situation toward the stable Nash equilibrium of both countries trying to launch first. Except imagine that somehow, through some heroic spasm of sanity, these two countries just decided not to nuke each other. That's the sort of thing it would take.
Of course, people are rarely able to be that insane, so success that way should not be counted on. But on the other hand, if we’re doomed forever such events will eventually occur—like a bubble of spontaneous low entropy spawning intelligent life in a steady-state universe.
The second and most already-implemented way is to jump outside the system and change the game to a non-doomed one. If people can’t share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is ‘doom,’ each player has an incentive to change the game.
Scott devotes a sub-argument to why we're still doomed to misery even if we solve coordination problems with government:
  1. Incentives for government employees sometimes don’t match the needs of the people.

  2. This has costs, and those costs help explain why some things that suck, suck.

I agree with this, but not all governments are equally costly as coordination technologies. Heck, not all governments are even a technology for improving people's lives: look at North Korea. My point is that there's no particular reason the costs can't be small, given sufficiently advanced cultural technology.
More interesting to me than government is the idea of iterating a game to encourage cooperation. In the one-shot prisoner's dilemma, the only Nash equilibrium is defect-defect, and so the prisoners are doomed. But if you have to play the prisoner's dilemma repeatedly, against a variety of other players, the best-performing strategies turn out to be largely cooperative ones, with tit-for-tat the classic example. This evasion of doom gives every player an incentive to try and replace one-shot dilemmas with iterated ones. Could Scott's post look like this?
  1. Map some central problems for humanity onto the iterated prisoner’s dilemma.

  2. Evolutionary game theory says we’re not doomed.
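The iterated version of the game is easy to see in a quick simulation. Here's a minimal sketch (the strategy names, payoff values, and function signatures are my own illustrative choices, not from Scott's post or this one); it uses the standard payoff ordering where mutual cooperation beats mutual defection, but unilateral defection tempts each player:

```python
# Payoffs per round: (my points, their points), using the conventional
# values T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(my_history, their_history):
    # The one-shot Nash equilibrium strategy: defect no matter what.
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate on the first round, then copy the opponent's last move.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Play an iterated prisoner's dilemma; return each side's total score."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, tit_for_tat))    # (104, 99)
print(play(always_defect, always_defect))  # (100, 100)
```

Over 100 rounds, two tit-for-tat players earn 300 points each, while defectors grind out roughly 100: the defector's one-round exploitation gain is swamped by the cooperation it forfeits afterward. That asymmetry is the whole argument for iterating the game.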

In short, I think this idea of “if you know the Nash equilibrium sucks, everyone will help you change the game” is an important one. Though given human irrationality, game-theoretic predictions (whether of eventual doom or non-doom) should be taken less than literally.