Consider two-player symmetric games with the following payoff matrix, where each cell lists the payoffs to Player 1 and Player 2 respectively:

|                     | Player 2: Krump | Player 2: Flitz |
|---------------------|-----------------|-----------------|
| **Player 1: Krump** | W, W            | X, Y            |
| **Player 1: Flitz** | Y, X            | Z, Z            |
One such game is the Prisoner’s Dilemma (in which strategy “Krump” is usually called “Cooperate”, and “Flitz” is usually called “Defect”). But the Prisoner’s Dilemma has additional structure. Specifically, to qualify as a PD, we must have Y > W and Z > X. Y > W gives the motivation to defect if the other player cooperates, and Z > X gives that motivation if the other player defects. With these two constraints, the Nash equilibrium is always going to be Flitz/Flitz for a payoff of (Z, Z). W > Z is what gives the dilemma its teeth; if instead Z > W, then that equilibrium is a perfectly fine outcome, possibly the optimal one.
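To make the equilibrium claim concrete, here's a quick sketch (my own illustration, not from the post) that enumerates the pure-strategy Nash equilibria of any game with this payoff matrix:

```python
# Pure-strategy Nash equilibria of the symmetric 2x2 game, using the
# W, X, Y, Z payoffs from the matrix above. 'K' = Krump, 'F' = Flitz.
def pure_nash_equilibria(w, x, y, z):
    # payoff[mine][theirs] is my payoff given my move and my playmate's
    payoff = {"K": {"K": w, "F": x}, "F": {"K": y, "F": z}}
    equilibria = []
    for mine in "KF":
        for theirs in "KF":
            # an equilibrium: neither player gains by unilaterally switching
            my_best = payoff[mine][theirs] >= max(payoff[m][theirs] for m in "KF")
            their_best = payoff[theirs][mine] >= max(payoff[m][mine] for m in "KF")
            if my_best and their_best:
                equilibria.append((mine, theirs))
    return equilibria

# A Prisoner's Dilemma (Y > W > Z > X): the only equilibrium is mutual Flitz.
print(pure_nash_equilibria(w=2, x=0, y=3, z=1))  # [('F', 'F')]
```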
I usually think of a Prisoner’s Dilemma as also having 2W > X + Y > 2Z. That specifies that mutual cooperation has the highest total return—it’s “socially optimal” in a meaningful sense1—while mutual defection has the lowest. It also means you can model the “defect” action as “take some value for yourself, but destroy value in the process”. (Alternatively, “cooperate” as “give some of your value to your playmate2, adding to that value in the process”.) We might consider instead:
If X + Y > 2W, then defecting while your playmate cooperates creates value (relative to cooperating). From a social perspective, Krump/Flitz or Flitz/Krump is preferable to Krump/Krump; and in an iterated game of this sort, you’d prefer to alternate between X and Y than to get a constant W. Wikipedia still classes this as a Prisoner’s Dilemma, but I think that’s dubious terminology, and I don’t think it’s standard. I might offhand suggest calling it the Too Many Cooks game. (This name assumes that you’d rather go hungry than cook, and that spoiled broth is better than no broth.)
If 2Z > X + Y, then defecting while your playmate defects creates value. I have no issue thinking of this as a Prisoner’s Dilemma; my instinct is that most analyses of the central case will also apply to this.
By assigning different values to the various numbers, what other games can we get?
As far as I can tell, we can classify games according to the ordering of W, X, Y, Z (which determine individual outcomes) and of 2W, X + Y, 2Z (which determine the social outcomes). Sometimes we’ll want to consider the case when two values are equal, but for simplicity I’m going to classify them assuming there are no equalities. Naively there would be 4! · 3! = 144 possible games, but
Reversing the order of everything doesn’t change the analysis, it just swaps the labels Krump and Flitz. So we can assume without loss of generality that W > Z. That eliminates half the combinations.
Obviously 2W > 2Z, so it’s just a question of where X + Y falls in comparison to them. That eliminates another half.
If W > X, Y > Z then 2W > X + Y > 2Z. That eliminates another four combinations.
If X, Y > W then X + Y > 2W, eliminating another four.
If Z > X, Y then 2Z > X + Y, eliminating four.
If X > W > Y > Z (or the same with X and Y swapped) then X + Y > 2Z, eliminating two.
If W > X > Z > Y (or the same with X and Y swapped) then 2W > X + Y, eliminating two.
That brings us down to just 20 combinations, and we’ve already looked at three of them, so this seems tractable. In the following, I’ve grouped games together mostly according to how interesting I think it is to distinguish them, and I’ve given them names when I didn’t know an existing name. Both the names and the grouping should be considered tentative.
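That count can be sanity-checked by brute force. Here's a sketch (mine, not from the post) that enumerates games with distinct integer payoffs, canonicalizes them so that W > Z, and counts the distinct combinations of individual and social orderings that actually occur:

```python
# Count distinct (individual ordering, social ordering) classes over many
# games with distinct integer payoffs W, X, Y, Z, skipping ties in the
# social comparisons as the classification above does.
from itertools import permutations

classes = set()
for w, x, y, z in permutations(range(1, 13), 4):
    if w < z:                      # swap the Krump/Flitz labels so that W > Z
        w, x, y, z = z, y, x, w
    if x + y in (2 * w, 2 * z):    # skip equalities
        continue
    values = {"W": w, "X": x, "Y": y, "Z": z}
    ordering = tuple(sorted("WXYZ", key=values.get, reverse=True))
    social = (x + y > 2 * w, x + y > 2 * z)
    classes.add((ordering, social))

print(len(classes))  # 20
```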
Cake Eating: W > X, Y > Z; 2W > X + Y > 2Z (two games)
In this game, you can either Eat Cake or Go Hungry. You like eating cake. You like when your playmate eats cake. There’s enough cake for everyone, and no reason to go hungry. The only Nash equilibrium is the one where everyone eats cake, and this is the socially optimal result. Great game! We should play it more often.
(If X > Y, then if you had to choose between yourself and your playmate eating cake, you’d eat it yourself. If Y > X, then in that situation you’d give it to them. Equalities among W, X, Y, Z would signify indifference to (yourself, your playmate) eating cake in various situations.)
Let’s Party: W > Z > X, Y; 2Z > X + Y (two games)
In this game, you can either go to a Party or stay Home. If you both go to a party, great! If you both stay home, that’s cool too. If either of you goes to a party while the other stays home, you’d both be super bummed about that.
Home/Home is a Nash equilibrium, but it’s not optimal either individually or socially.
In the case W = Z, this is a pure coordination game, which doesn’t have the benefit of an obvious choice that you can make without communicating.
(Wikipedia calls this the assurance game on that page, but uses that name for the Stag Hunt on the page for that, so I’m not using that name.)
Studying For a Test: W > X > Z > Y; 2W > X + Y (two games)
You can either Study or Bunk Off. No matter what your playmate does, you’re better off Studying, and if you Study together you can help each other. If you Bunk Off, then it’s more fun if your playmate Bunks Off with you; but better still for you if you just start Studying.
The only Nash equilibrium is Study/Study, which is also socially optimal.
Stag hunt: W > Y > Z > X; 2W > X + Y (two games)
You can either hunt Stag or Hare (sometimes “Rabbit”). If you both hunt Stag, you successfully catch a stag between you, which is great. If you both hunt Hare, you each catch a hare, which is fine. You can catch a hare by yourself, but if you hunt Stag and your playmate hunts Hare, you get nothing.
This also works with Y = Z. If Y > Z then two people hunting Hare get in each other’s way.
The Nash equilibria are at Stag/Stag and Hare/Hare, and Stag/Stag is socially optimal. Hare/Hare might be the worst possible social result (if X + Y > 2Z), though I think this game is usually described with 2Z > X + Y.
The Abundant Commons: X > W > Y, Z (five games)
You can Take some resource from the commons, or you can Leave it alone. There’s plenty of resource to be taken, and you’ll always be better off taking it. But if you and your playmate both play Take, you get in each other’s way and reduce efficiency (unless 2W > X + Y).
If 2W > X + Y then you don’t interfere with each other significantly; the socially optimal result is also the Nash equilibrium. But if X + Y > 2W then the total cost of interfering is more than the value of resource either of you can take, and some means of coordinating one person to Take and one to Leave would be socially valuable.
If Y > Z, then if (for whatever reason) you Leave the resource, you’d prefer your partner Takes it. If Z > Y, you’d prefer them to also Leave it.
An interesting case here is X > W > Z > Y with X + Y > 2W (e.g. X = 10, W = 4, Z = 3, Y = 0, so X + Y = 10 > 8 = 2W). Take/Leave and Leave/Take are socially optimal, but the Leave player would prefer literally any other outcome.
Take/Take is the only Nash equilibrium.
Farmer’s Dilemma: Y > W > X > Z; X + Y > 2Z (two games)
In this game, you can Work (pitch in to help build a mutual resource) or Shirk (not do that). If either of you Works, it provides more than its cost to both of you. Ideally, you want to Shirk while your playmate Works; but if your playmate Shirks, you’d rather Work than leave the work undone. The Nash equilibria are at Work/Shirk and Shirk/Work.
If 2W > X + Y then the socially optimal outcome is Work/Work, and a means to coordinate on that outcome would be socially useful. If X + Y > 2W, the socially optimal outcome is for one player to Work while the other Shirks, but with no obvious choice for which one of you it should be.
Also known as Chicken, Hawk/Dove and Snowdrift.
Anti-coordination: X, Y > W > Z; X + Y > 2W (two games)
In this game, the goal is to play a different move than your playmate. If X = Y then there’s no reason to prefer one move over another, but if they’re not equal there’ll be some maneuvering around who gets which reward. If you’re not happy with the outcome, then changing the move you play will harm your playmate more than it harms you. The Nash equilibria are when you play different moves, and these are socially optimal.
Prisoner’s Dilemma/Too Many Cooks: Y > W > Z > X (three games)
Covered in preamble.
(I’m a little surprised that this is the only case where I’ve wanted to rename the game depending on the social preference of the outcomes. That said, the only other games where X + Y isn’t forced to be greater or less than 2W are the Farmer’s Dilemma and the Abundant Commons, and those are the ones I’d most expect to want to split in future.)
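The whole classification by individual ordering fits in one small function. This is my own summary sketch (the function name and structure are mine, not the post's), lumping Prisoner's Dilemma together with Too Many Cooks since the ordering of W, X, Y, Z alone can't separate them:

```python
# Classify a symmetric 2x2 game with distinct payoffs into the families
# named above, ignoring the social ordering of 2W, X + Y, 2Z.
def classify(w, x, y, z):
    if w < z:                          # relabel Krump/Flitz so that W > Z
        w, x, y, z = z, y, x, w
    if min(x, y) > w:
        return "Anti-coordination"      # X, Y > W > Z
    if w > max(x, y) and min(x, y) > z:
        return "Cake Eating"            # W > X, Y > Z
    if z > max(x, y):
        return "Let's Party"            # W > Z > X, Y
    if w > x > z > y:
        return "Studying For a Test"
    if w > y > z > x:
        return "Stag Hunt"
    if x > w:
        return "The Abundant Commons"   # X > W > Y, Z
    if y > w > x > z:
        return "Farmer's Dilemma"
    return "Prisoner's Dilemma / Too Many Cooks"  # Y > W > Z > X

print(classify(w=2, x=0, y=3, z=1))  # Prisoner's Dilemma / Too Many Cooks
```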
I made a graph of these games. I only classified them according to the ordering of W, X, Y, Z (i.e. I lumped Prisoner’s Dilemma with Too Many Cooks), and I drew an edge whenever two games were the same apart from swapping two adjacent values.
The lines are colored according to which pair of values is swapped (red first two, blue middle two, green last two). I’m not sure we learn much from it, but I find the symmetry pleasing.
A change of basis?
I don’t want to look too deep into this right now, but here’s a transformation we could apply. Instead of thinking about these games in terms of the numbers W, X, Y, Z, we think in terms of “the value of Player 2 playing Flitz over Krump”:
X − W, the value to Player 1, if Player 1 plays Krump.
Y − W, the value to Player 2, if Player 1 plays Krump.
Z − Y, the value to Player 1, if Player 1 plays Flitz.
Z − X, the value to Player 2, if Player 1 plays Flitz.
These four numbers determine W, X, Y, Z, up to adding a constant value to all of them, which doesn’t change the games. For example, Prisoner’s Dilemma and Too Many Cooks both have Y − W > 0 and Z − X > 0. A Prisoner’s Dilemma also has (X − W) + (Y − W) < 0 (i.e. X + Y < 2W) while Too Many Cooks has (X − W) + (Y − W) > 0.
So what happens if we start thinking about these games in terms of these differences instead? Does this give us useful insights? I don’t know.
Of course, for these numbers to point at one of the games studied in this post, we must have (X − W) + (Z − X) = (Y − W) + (Z − Y) (both sides equal Z − W). I think if you relax that constraint, you start looking into games slightly more general than these. But I haven’t thought about it too hard.
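Spelled out in code (with a, b, c, d as stand-in names for the four numbers; a notation of my own, not the post's):

```python
# The four "value of Flitz over Krump" numbers, computed from W, X, Y, Z.
# The constraint a + d == b + c (both sides equal Z - W) is what makes four
# such numbers consistent with some underlying payoffs W, X, Y, Z.
def deltas(w, x, y, z):
    a = x - w  # value to Player 1, if Player 1 plays Krump
    b = y - w  # value to Player 2, if Player 1 plays Krump
    c = z - y  # value to Player 1, if Player 1 plays Flitz
    d = z - x  # value to Player 2, if Player 1 plays Flitz
    return a, b, c, d

w, x, y, z = 2, 0, 3, 1             # a Prisoner's Dilemma: Y > W > Z > X
a, b, c, d = deltas(w, x, y, z)
assert a + d == b + c == z - w      # the consistency constraint
# The deltas pin down the payoffs up to a shared constant (here, W):
assert (0, a, b, a + d) == (w - w, x - w, y - w, z - w)
```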
1. My use of the phrase comes from Ellickson’s Order Without Law. Part of why I’m writing this is to help clarify my thinking about that book. I don’t mean to imply anything in particular by it, I just like the ring of it better than alternatives like “welfare maximizing”. ↩
2. Calling them your “opponent” assumes a level of antagonism that may not be present. ↩