Most Prisoner’s Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes

I previously claimed that most apparent Prisoner’s Dilemmas are actually Stag Hunts. I now claim that they’re Battle of the Sexes in practice. I conclude with some lessons for fighting Moloch.

This post turned out especially dense with inferential leaps and unexplained terminology. If you’re confused, ask in the comments and I’ll try to clarify.

Some ideas here are due to Tsvi Benson-Tilsen.


(Edited to add, based on comments:)

Here’s a summary of the central argument which, despite the lack of pictures, may be easier to understand.

  1. Most Prisoner’s Dilemmas are actually iterated.

  2. Iterated games are a whole different game with a different action space (because you can react to history), a different payoff matrix (because you care about future payoffs, not just the present), and a different set of equilibria.

  3. It is characteristic of PD that players are incentivized to play away from the Pareto frontier; i.e., no Pareto-optimal point is an equilibrium. This is not the case with iterated PD.

  4. It is characteristic of Stag Hunt that there is a Pareto-optimal equilibrium, but there is also another equilibrium which is far from optimal. This is also the case with iterated PD. So iterated PD resembles Stag Hunt.

  5. However, it is furthermore true of iterated PD that there are multiple different Pareto-optimal equilibria, which benefit different players more or less. Also, if players don’t successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). This makes iterated PD resemble Battle of the Sexes.

In fact, the Folk Theorem suggests that many iterated games will resemble Battle of the Sexes in this way.
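
To make points 3 and 4 concrete, here is a minimal sketch in Python. The payoff numbers are an assumption on my part (temptation 3, mutual cooperation 2, mutual defection 1, sucker’s payoff 0), chosen to match the figures later in this post: in the one-shot game defection strictly dominates, while in the iterated game cooperating against a grim-trigger opponent is a best response once future payoffs matter enough.

```python
# A minimal sketch, assuming PD payoffs of: temptation 3, mutual
# cooperation 2, mutual defection 1, sucker 0 (consistent with the
# numbers used later in this post).

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 2, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 1,
}

# Point 3: in the one-shot game, defection strictly dominates,
# so mutual defection is the only equilibrium.
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# Point 4: in the iterated game with discount factor d, if the other
# player uses grim trigger, cooperating forever pays 2/(1-d), while a
# single defection pays 3 now plus a punishment stream of 1 thereafter.
def cooperation_is_best_response(d):
    cooperate_forever = 2 / (1 - d)
    defect_once = 3 + d * 1 / (1 - d)
    return cooperate_forever >= defect_once

print(cooperation_is_best_response(0.9))  # True: patient players can sustain cooperation
print(cooperation_is_best_response(0.1))  # False: impatient players defect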


In a comment on The Schelling Choice is “Rabbit”, not “Stag” I said:

In the book The Stag Hunt, Skyrms similarly says that lots of people use Prisoner’s Dilemma to talk about social coordination, and he thinks people should often use Stag Hunt instead.

I think this is right. Most problems which initially seem like Prisoner’s Dilemma are actually Stag Hunt, because there are potential enforcement mechanisms available. The problems discussed in Meditations on Moloch are mostly Stag Hunt problems, not Prisoner’s Dilemma problems—Scott even talks about enforcement, when he describes the dystopia where everyone has to kill anyone who doesn’t enforce the terrible social norms (including the norm of enforcing).

This might initially sound like good news. Defection in Prisoner’s Dilemma is an inevitable conclusion under common decision-theoretic assumptions. Trying to escape multipolar traps with exotic decision theories might seem hopeless. On the other hand, rabbit in Stag Hunt is not an inevitable conclusion, by any means.

Unfortunately, in reality, hunting stag is actually quite difficult. (“The Schelling choice is Rabbit, not Stag… and that really sucks!”)

Inspired by Zvi’s recent sequence on Moloch, I wanted to expand on this. These issues are important, since they determine how we think about group action problems / tragedy of the commons / multipolar traps / Moloch / all the other synonyms for the same thing.

My current claim is that most Prisoner’s Dilemmas are actually Battle of the Sexes. But let’s first review the relevance of Stag Hunt.

Your PD Is Probably a Stag Hunt

There are several reasons why an apparent Prisoner’s Dilemma may be more of a Stag Hunt.

  • The game is actually an iterated game.

  • Reputation networks could punish defectors and reward cooperators.

  • There are enforceable contracts.

  • Players know quite a bit about how other players think (in the extreme case, players can view each other’s source code).

Each of these formal models creates a situation where players can get into a cooperative equilibrium. The challenge is that you can’t unilaterally decide everyone should be in the cooperative equilibrium. If you want good outcomes for yourself, you have to account for what everyone else probably does. If you think everyone is likely to be in a bad equilibrium where people punish each other for cooperating, then aligning with that equilibrium might be the best you can do! This is like hunting rabbit.

Exercise: is there a situation in your life, or within spitting distance, which seems like a Prisoner’s Dilemma to you, where everyone is stuck hurting each other due to bad incentives? Is it an iterated situation? Could there be reputation networks which weed out bad actors? Could contracts or contract-like mechanisms be used to encourage good behavior?

So, why do we perceive so many situations to be Prisoner’s Dilemma-like rather than Stag Hunt-like? Why does Moloch sound more like “each individual is incentivized to make it worse for everyone else” than “everyone is stuck in a bad equilibrium”?

Sarah Constantin writes:

A friend of mine speculated that, in the decades that humanity has lived under the threat of nuclear war, we’ve developed the assumption that we’re living in a world of one-shot Prisoner’s Dilemmas rather than repeated games, and lost some of the social technology associated with repeated games. Game theorists do, of course, know about iterated games and there’s some fascinating research in evolutionary game theory, but the original formalization of game theory was for the application of nuclear war, and the 101-level framing that most educated laymen hear is often that one-shot is the prototypical case and repeated games are hard to reason about without computer simulations.

To use board-game terminology, the game may be a Prisoner’s Dilemma, but the metagame can use enforcement techniques. Accounting for enforcement techniques, the game is more like a Stag Hunt, where defecting is “rabbit” and cooperating is “stag”.

Battle of the Sexes

But this is a bit informal. You don’t separately choose how to metagame and how to game; really, your iterated strategy determines what you do in individual games.

So it’s more accurate to just think of the iterated game. There are a bunch of iterated strategies which you can choose from.

The key difference between the single-shot game and the iterated game is that cooperative strategies, such as Tit for Tat (but including others), are available. These strategies have the property that (1) they are equilibria—if you know the other player is playing Tit for Tat, there’s no reason for you not to; (2) if both players use them, they end up cooperating.

A key feature of the Tit for Tat strategy is that if you do end up playing against a pure defector, you do almost as well as you possibly could against them. This doesn’t sound very much like a Stag Hunt. It begins to sound like a Stag Hunt in which you can change your mind and go hunt rabbit if the other person doesn’t show up to hunt stag with you.
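
Here’s a quick simulation of that claim, again under my assumed payoff numbers: Tit for Tat cooperates forever with a copy of itself, and gives up only a single round’s payoff against a pure defector.

```python
# A quick simulation, under the same assumed payoffs. Tit for Tat
# cooperates forever with itself, and loses only one round's worth of
# payoff against a pure defector.

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy the opponent's last move.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy reacts to the other's history
        b = strategy_b(moves_a)
        moves_a.append(a)
        moves_b.append(b)
        payoff_a, payoff_b = PAYOFF[(a, b)]
        score_a += payoff_a
        score_b += payoff_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (200, 200): mutual cooperation throughout
print(play(tit_for_tat, always_defect))  # (99, 102): only the first round is lost
```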

Sounds great, right? We can just play one of these cooperative strategies.

The problem is, there are many possible self-enforcing equilibria. Each player can threaten the other player with a Grim Trigger strategy: they defect forever the moment some specified condition isn’t met. This can be used to extort the other player for more than just the mutual-cooperation payoff. Here’s an illustration of possible outcomes, with the enforceable frequencies in the white area:

The entire white area consists of enforceable equilibria: players could use a grim-trigger strategy to make each other cooperate with very close to the desired frequency, because what they’re getting is still better than mutual defection, even if it is far from fair, or far from the Pareto frontier.

Alice could be extorting Bob by cooperating 2/3rds of the time, with a grim-trigger threat of never cooperating at all. Alice would then get an average payoff of 2⅓, while Bob would get an average payoff of 1⅓.
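
Checking that arithmetic explicitly (under the assumed payoff matrix: mutual cooperation 2, temptation 3, sucker 0, mutual defection 1):

```python
# Checking the arithmetic: Alice cooperates 2/3 of the time, Bob
# (under grim-trigger threat) cooperates always, with assumed payoffs
# of 2 for mutual cooperation, 3 for defecting against a cooperator,
# and 0 for cooperating against a defector.
from fractions import Fraction

p = Fraction(2, 3)            # Alice's cooperation frequency
alice = p * 2 + (1 - p) * 3   # 7/3, i.e. 2 1/3
bob   = p * 2 + (1 - p) * 0   # 4/3, i.e. 1 1/3

print(alice, bob)  # 7/3 4/3
# Bob tolerates this because 4/3 still beats the mutual-defection payoff of 1.
```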

In the artificial setting of Prisoner’s Dilemma, it’s easy to say that Cooperate, Cooperate is the “fair” solution, and an equilibrium like I just described is “Alice exploiting Bob”. However, real games are not so symmetric, and so it will not be so obvious what “fair” is. The purple squiggle highlights the Pareto frontier—the space of outcomes which are “efficient” in the sense that no alternative is purely better for everybody. These outcomes may not all be fair, but they all have the advantage that no “money is left on the table”—any “improvement” we could propose for those outcomes makes things worse for at least one person.

Notice that I’ve also colored areas where Bob and Alice are doing worse than payoff 1. Bob can’t enforce Alice’s cooperation while defecting more than half the time; Alice would just defect. And vice versa. All of the points within the shaded regions have this property. So not all Pareto-optimal solutions can be enforced.

Any point in the white region can be enforced, however. Each player could be watching the statistics of the other player’s cooperation, prepared to pull a grim-trigger if the statistics ever stray too far from the target point. This includes so-called mutual blackmail equilibria, in which both players cooperate with probability slightly better than zero (while threatening to never cooperate at all if the other player detectably diverges from that frequency). This idea—that ‘almost any’ outcome can be enforced—is known as the Folk Theorem in game theory.
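
Here is a small sketch of that enforceability test, still under the assumed payoffs: a pair of long-run cooperation frequencies can be sustained by grim-trigger threats exactly when it leaves each player above the mutual-defection payoff of 1 (the worst the other player can force on them).

```python
# A sketch of the enforceability condition, under the assumed payoffs.
# If Alice cooperates with frequency p and Bob with frequency q
# (independently), each player's long-run average payoff must beat the
# mutual-defection payoff of 1 for grim-trigger threats to hold.

def average_payoffs(p, q):
    alice = 2*p*q + 0*p*(1-q) + 3*(1-p)*q + 1*(1-p)*(1-q)
    bob   = 2*p*q + 3*p*(1-q) + 0*(1-p)*q + 1*(1-p)*(1-q)
    return alice, bob

def enforceable(p, q):
    alice, bob = average_payoffs(p, q)
    return alice > 1 and bob > 1

print(enforceable(1.0, 1.0))    # True: mutual cooperation
print(enforceable(2/3, 1.0))    # True: the extortion point above
print(enforceable(0.05, 0.05))  # True: a mutual-blackmail equilibrium
print(enforceable(1.0, 0.4))    # False: Bob defects too often; Alice would walk
```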

The Battle of the Sexes part is that (particularly with grim-trigger enforcement) everyone has to choose the same equilibrium to enforce; otherwise everyone is stuck playing defect. You’d rather be in even a bad mutual-blackmail type equilibrium, as opposed to selecting incompatible points to enforce. Just like, in Battle of the Sexes, you’d prefer to meet together at any venue rather than end up at different places.

Furthermore, I would claim that most apparent Stag Hunts which you encounter in real life are actually Battle of the Sexes, in the sense that there are many different stags to hunt and it isn’t immediately clear which one should be hunted. Each stag will be differently appealing to different people, so it’s difficult to establish common knowledge about which one is worth going after together.

Exercise: what stags aren’t you hunting with the people around you?

Taking Pareto Improvements

Fortunately, Grim Trigger is not the only enforcement mechanism which can be used to build an equilibrium. Grim Trigger creates a crisis in which you’ve got to guess which equilibrium you’re in very quickly, to avoid angering the other player; and no experimentation is allowed. There are much more forgiving strategies (and contrite ones, too, which helps in a different way).

Actually, even using Grim Trigger to enforce things, why would you punish the other player for doing something better for you? There’s no motive for punishing the other player for raising their cooperation frequency.

In a scenario where you don’t know which Grim Trigger the other player is using, but you don’t think they’ll punish you for cooperating more than the target, a natural response is for both players to just cooperate a bunch.

So, it can be very valuable to use enforcement mechanisms which allow for Pareto improvements.

Taking Pareto improvements is about moving from the middle to the boundary:

(I’ve indicated the directions for Pareto improvements starting from the origin in yellow, as well as what happens in other directions; also, I drew a bunch of example Pareto improvements as black arrows to illustrate how Pareto improvements are awesome. Some of the black arrows might not be perfectly within the range of Pareto improvements, sorry about that.)

However, there’s also an argument against taking Pareto improvements. If you accept any Pareto improvements, you can be exploited in the sense mentioned earlier—you’ll accept any situation, so long as it’s not worse for you than where you started. So you will take some pretty poor deals. Notice that one Pareto improvement can prevent a different one—for example, if you move to (1/2, 1), then you can’t move to (1, 1/2) via Pareto improvement. So you could always reject a Pareto improvement because you’re holding out for a better deal. (This is the Battle of the Sexes aspect of the situation—there are Pareto-optimal outcomes which are better or worse for different people, so, it’s hard to agree on which improvement to take.)
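
A tiny sketch of that foreclosure effect, using the illustrative coordinates above:

```python
# The foreclosure problem, concretely: neither (1/2, 1) nor (1, 1/2)
# Pareto-dominates the other, so taking one of those improvements rules
# out ever reaching the other by Pareto improvements alone.

def pareto_improves(new, old):
    # At least as good for everyone, strictly better for someone.
    return (all(n >= o for n, o in zip(new, old))
            and any(n > o for n, o in zip(new, old)))

origin = (0, 0)
print(pareto_improves((0.5, 1), origin))    # True
print(pareto_improves((1, 0.5), origin))    # True
print(pareto_improves((1, 0.5), (0.5, 1)))  # False: worse for player 2
print(pareto_improves((0.5, 1), (1, 0.5)))  # False: worse for player 1
```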

That’s where Cooperation between Agents with Different Notions of Fairness comes in. The idea in that post is that you don’t take just any Pareto improvement—you have standards of fairness—but you don’t just completely defect for less-than-perfectly-fair deals, either. What this means is that two such agents with incompatible notions of fairness can’t get all the way to the Pareto frontier, but the closer their notions of fairness are to each other, the closer they can get. And, if the notions of fairness are compatible, they can get all the way.
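
As a rough illustration (my own toy construction, not necessarily the exact mechanism of that post): one way to have standards of fairness without completely defecting is to accept an unfair deal with a probability tuned so the proposer expects no more than their fair share (assuming, for simplicity, that disagreement pays zero).

```python
# A toy construction (my own sketch, not necessarily the mechanism from
# that post): accept unfair proposals with a probability chosen so the
# proposer's expected payoff is capped at what I consider their fair
# share. Assumes a disagreement point that pays zero.

def acceptance_probability(their_offered_payoff, their_fair_payoff):
    if their_offered_payoff <= their_fair_payoff:
        return 1.0  # fair (or generous to me): always accept
    # Accept just often enough that unfairness stops being profitable:
    # probability * offered = fair.
    return their_fair_payoff / their_offered_payoff

print(acceptance_probability(2.0, 2.0))  # 1.0: a fair deal always goes through
print(acceptance_probability(2.1, 2.0))  # ~0.95: slightly unfair, usually accepted
print(acceptance_probability(3.0, 2.0))  # ~0.67: heavily unfair, often rejected
```

Near-fair deals mostly go through, so agents whose fairness notions are close lose little; the further apart the notions, the more surplus gets burned.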

Moloch is the Folk Theorem

Because of the Folk Theorem, most iterated games will have the same properties I’ve been talking about (not just iterated PD). Specifically, most iterated games will have:

  1. The stag-hunt-like property: There is a Pareto-optimal equilibrium, but there is also an equilibrium far from Pareto-optimal.

  2. The battle-of-the-sexes-like property: There are multiple Pareto-optimal equilibria, so that even if you’re trying to cooperate, you don’t necessarily know which one to aim for; and, different options favor different people, making it a complex negotiation even if you can discuss the problem ahead of time.

There’s a third important property which I’ve been assuming, but which doesn’t follow so directly from the Folk Theorem: the suboptimal equilibrium is “safe”, in that you can unilaterally play that way to get some guaranteed utility. The Pareto-optimal equilibria are not similarly safe; mistakenly playing one of them when other people don’t can be worse than the “safe” guarantee from the poor equilibrium.

A game with all three properties is like Stag Hunt with multiple stags (where you all must hunt the same stag to win, but can hunt rabbit alone for a guaranteed mediocre payoff), or Battle of the Sexes where you can just stay home (you’d rather stay home than go out alone).
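
Here’s a toy version of that game; all the payoff numbers are illustrative assumptions (rabbit pays a safe 1, a stag pays 3 but only if everyone picks the same stag):

```python
# A toy multi-stag Stag Hunt (all numbers are illustrative assumptions):
# rabbit is a safe 1; a stag pays 3, but only if every player picked
# the same stag.

def payoff(my_choice, all_choices):
    if my_choice == "rabbit":
        return 1  # safe regardless of what others do
    if all(choice == my_choice for choice in all_choices):
        return 3  # everyone coordinated on the same stag
    return 0      # hunted a stag others didn't show up for

group = ["stag_A", "stag_A", "stag_B"]  # miscoordinated
print([payoff(c, group) for c in group])  # [0, 0, 0]

group = ["stag_A", "stag_A", "stag_A"]  # coordinated
print([payoff(c, group) for c in group])  # [3, 3, 3]

group = ["stag_A", "rabbit", "stag_A"]  # the rabbit-hunter stays safe
print([payoff(c, group) for c in group])  # [0, 1, 0]
```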

Lessons in Slaying Moloch

0. I didn’t even address this in this essay, but it’s worth mentioning: not all conflicts are zero-sum. In the introduction to the 1980 edition of The Strategy of Conflict, Thomas Schelling discusses the reception of the book. He recalls that a prominent political theorist “exclaimed how much this book had done for his thinking, and as he talked with enthusiasm I tried to guess which of my sophisticated ideas in which chapters had made so much difference to him. It turned out it wasn’t any particular idea in any particular chapter. Until he read this book, he had simply not comprehended that an inherently non-zero-sum conflict could exist.”

1. In situations such as iterated games, there’s no in-principle pull toward defection. Prisoner’s Dilemma seems paradoxical when we first learn of it (at least, it seemed so to me) because we are not accustomed to such a harsh divide between individual incentives and the common good. But perhaps, as Sarah Constantin speculated in Don’t Shoot the Messenger, modern game theory and economics have conditioned us to be used to this conflict due to their emphasis on single-shot interactions. As a result, Moloch comes to sound like an inevitable gravity, pulling everything downwards. This is not necessarily the case.

2. Instead, most collective action problems are bargaining problems. If a solution can be agreed upon, we can generally use weak enforcement mechanisms (social norms) or strong enforcement (centralized governmental enforcement) to carry it out. But, agreeing about the solution may not be easy. The more parties involved, the more difficult.

3. Try to keep a path open toward better solutions. Since wide adoption of a particular solution can be such an important problem, there’s a tendency to treat alternative solutions as the enemy. This bars the way to further progress. (One could loosely characterize this as the difference between religious doctrine and democratic law; religious doctrine trades away the ability to improve in favor of the more powerful consensus-reaching technology of immutable universal law. But of course this oversimplifies things somewhat.) Keeping a path open for improvements is hard, partly because it can create exploitability. But it keeps us from getting stuck in a poor equilibrium.