Maybe we’re not doomed

This is prompted by Scott's excellent article, Meditations on Moloch.

I might caricature (grossly unfairly) his post like this:

  1. Map some central problems for humanity onto the tragedy of the commons.

  2. Game theory says we're doomed.

Of course my life is pretty nice right now. But, goes the story, this is just a non-equilibrium starting period. We're inexorably progressing towards a miserable Nash equilibrium, and once we get there we'll be doomed forever. (This forever loses a bit of foreverness if one expects everything to get interrupted by self-improving AI, but let's elide that.)
There are a few ways we might not be doomed. The first and less likely is that people will just decide not to go to their doom, even though it's the Nash equilibrium. To give a totally crazy example, suppose there were two countries playing a game where the first one to launch missiles had a huge advantage. And neither country trusted the other, and there were multiple false alarms—thus pushing the situation to the stable Nash equilibrium of both countries trying to launch first. Except imagine that somehow, through some heroic spasm of insanity, these two countries just decided not to nuke each other. That's the sort of thing it would take.
Of course, people are rarely able to be that insane, so success that way should not be counted on. But on the other hand, if we're doomed forever such events will eventually occur—like a bubble of spontaneous low entropy spawning intelligent life in a steady-state universe.
The second and most already-implemented way is to jump outside the system and change the game to a non-doomed one. If people can't share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is ‘doom,’ each player has an incentive to change the game.
Scott devotes a sub-argument to why things will still be miserable even if we solve coordination problems with government:
  1. Incentives for government employees sometimes don't match the needs of the people.

  2. This has costs, and those costs help explain why some things that suck, suck.

I agree with this, but not all governments are equally costly as coordination technologies. Heck, not all governments are even a technology for improving people's lives—look at North Korea. My point is that there's no particular reason that costs can't be small, with sufficiently advanced cultural technology.
More interesting to me than government is the idea of iterating a game to encourage cooperation. In the normal prisoner's dilemma game, the only Nash equilibrium is defect-defect and so the prisoners are doomed. But if you have to play the prisoner's dilemma game repeatedly, with a variety of other players, the best strategy turns out to be a largely cooperative one (sketched in code after the list below). This evasion of doom gives every player an incentive to try and replace one-shot dilemmas with iterated ones. Could Scott's post look like this?
  1. Map some central problems for humanity onto the iterated prisoner's dilemma.

  2. Evolutionary game theory says we're not doomed.

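As a toy illustration of the iterated-dilemma claim above, here is a minimal sketch (mine, not anything from Scott's post): a round-robin iterated prisoner's dilemma tournament with the standard payoffs, where a few simple strategies play each other repeatedly. The particular strategy set and round count are arbitrary illustrative choices.

```python
# Minimal iterated prisoner's dilemma tournament (illustrative sketch).
# Standard payoffs: temptation 5, mutual cooperation 3, mutual defection 1, sucker 0.

import itertools

PAYOFF = {  # (my move, their move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def always_cooperate(my_history, their_history):
    return 'C'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else 'C'

STRATEGIES = {
    'always_defect': always_defect,
    'always_cooperate': always_cooperate,
    'tit_for_tat': tit_for_tat,
}

def play_match(strat_a, strat_b, rounds=200):
    """Play an iterated game and return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament():
    totals = {name: 0 for name in STRATEGIES}
    # Every strategy plays every other strategy, plus a copy of itself.
    for (name_a, a), (name_b, b) in itertools.combinations_with_replacement(
            STRATEGIES.items(), 2):
        score_a, score_b = play_match(a, b)
        totals[name_a] += score_a
        totals[name_b] += score_b
    return totals

if __name__ == '__main__':
    for name, score in sorted(tournament().items(), key=lambda kv: -kv[1]):
        print(f'{name:>16}: {score}')
```

With these particular strategies and payoffs, tit_for_tat ends up with the highest total: mostly cooperative, but willing to punish defection, it does better over repeated play than either unconditional strategy—which is the sense in which iteration changes the one-shot "doomed" answer.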
In short, I think this idea of “if you know the Nash equilibrium sucks, everyone will help you change the game” is an important one. Though given human irrationality, game-theoretic predictions (whether of eventual doom or non-doom) should be taken less than literally.