Fairness vs. Goodness

It seems that back when the Prisoner’s Dilemma was still being worked out, Merrill Flood and Melvin Dresher tried a 100-round iterated PD on two smart but unprepared subjects: Armen Alchian of UCLA and John D. Williams of RAND.

The kicker being that the payoff matrix was asymmetrical, with dual cooperation awarding JW twice as many points as AA:

(AA, JW)  JW: D      JW: C
AA: D     (0, 0.5)   (1, −1)
AA: C     (−1, 2)    (0.5, 1)
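
For anyone who wants to check the structure, here is a quick Python sketch of my own (nothing from the original experiment): it verifies that each player’s payoffs satisfy the standard Temptation > Reward > Punishment > Sucker ordering, shows that (C, C) maximizes the joint payoff, and tallies what 100 rounds of unbroken cooperation would have paid.

```python
# Payoff matrix as (AA, JW) tuples, indexed by (AA's move, JW's move).
payoffs = {
    ("D", "D"): (0, 0.5),
    ("D", "C"): (1, -1),
    ("C", "D"): (-1, 2),
    ("C", "C"): (0.5, 1),
}

def pd_ordering(t, r, p, s):
    """Standard PD condition: Temptation > Reward > Punishment > Sucker."""
    return t > r > p > s

# From AA's side: T = 1, R = 0.5, P = 0, S = -1.
assert pd_ordering(payoffs[("D", "C")][0], payoffs[("C", "C")][0],
                   payoffs[("D", "D")][0], payoffs[("C", "D")][0])
# From JW's side: T = 2, R = 1, P = 0.5, S = -1.
assert pd_ordering(payoffs[("C", "D")][1], payoffs[("C", "C")][1],
                   payoffs[("D", "D")][1], payoffs[("D", "C")][1])

# Joint payoff of each outcome: (C, C) wins with 1.5 per round.
print({moves: sum(pay) for moves, pay in payoffs.items()})

# 100 rounds of unbroken mutual cooperation:
# AA would end with exactly half of JW's total.
print(100 * payoffs[("C", "C")][0], 100 * payoffs[("C", "C")][1])  # 50.0 100.0
```

Fifty points to a hundred: that 2-to-1 split under full cooperation is the “unfairness” AA’s occasional defections are pushing back against.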

The resulting 100 iterations, with a log of comments written by both players, make for fascinating reading.

JW spots the possibilities of cooperation right away, while AA is slower to catch on.

But once AA does catch on to the possibilities of cooperation, AA goes on throwing in an occasional D… because AA thinks the natural meeting point for cooperation is a fair outcome, where both players get around the same number of total points.

JW goes on trying to enforce (C, C), the option that maximizes the two players’ combined payoff, by punishing AA’s attempts at defection. JW’s log shows comments like “He’s crazy. I’ll teach him the hard way.”

Meanwhile, AA’s log shows comments such as “He won’t share. He’ll punish me for trying!”

I confess that my own sympathies lie with JW, and I don’t think I would have played AA’s game in AA’s shoes. This would seem to indicate that I’m more of a utilitarian than a fair-i-tarian. Life doesn’t always hand you fair games, and the best we can do for each other is play them positive-sum.

Though I might have been somewhat more sympathetic to AA, if the (C, C) outcome had actually lost him points, and only (D, C) had made it possible for him to gain them back. For example, this is also a Prisoner’s Dilemma:

(AA, JW)  JW: D      JW: C
AA: D     (−2, 2)    (2, 0)
AA: C     (−5, 6)    (−1, 4)
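
Running the same ordering check on this matrix (again, my own sketch) confirms it is still a genuine Prisoner’s Dilemma, and makes the sign problem explicit: the only cell in the whole table where AA comes out ahead in absolute terms is the one where he defects against JW’s cooperation.

```python
# Second matrix: (C, C) now loses AA a point; only (D, C) earns him any.
payoffs2 = {
    ("D", "D"): (-2, 2),
    ("D", "C"): (2, 0),
    ("C", "D"): (-5, 6),
    ("C", "C"): (-1, 4),
}

# PD ordering T > R > P > S holds for both players:
assert 2 > -1 > -2 > -5   # AA: (D,C) > (C,C) > (D,D) > (C,D)
assert 6 > 4 > 2 > 0      # JW: (C,D) > (C,C) > (D,D) > (D,C)

# Every outcome in which AA profits in absolute terms:
print({moves: pay[0] for moves, pay in payoffs2.items() if pay[0] > 0})
# {('D', 'C'): 2} -- AA gains only by exploiting JW's cooperation.
```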

Theoretically, of course, utility functions are invariant up to positive affine transformation, so a utility’s absolute sign is not meaningful. But this is not always a good metaphor for real life.
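
As a quick illustration (my example, not anything in the original): apply a positive affine map u → a·u + b, with a > 0, to AA’s payoffs alone, and every preference comparison AA cares about survives, even though the signs change completely.

```python
# AA's payoffs in the second matrix: Temptation, Reward, Punishment, Sucker.
before = {"T": 2, "R": -1, "P": -2, "S": -5}

# Any positive affine transformation, e.g. u -> 2*u + 7, preserves
# AA's preference ordering while making every payoff but S positive.
after = {k: 2 * v + 7 for k, v in before.items()}
print(after)  # {'T': 11, 'R': 5, 'P': 3, 'S': -3}

# The ordering T > R > P > S survives, so it is "the same game" --
# even though (C, C) is no longer an absolute loss for AA.
assert before["T"] > before["R"] > before["P"] > before["S"]
assert after["T"] > after["R"] > after["P"] > after["S"]
```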

Of course what we want in this case, societally speaking, is for JW to slip AA a bribe under the table. That way we can maximize social utility while letting AA go on making a profit. But if AA starts out with a negative number in (C, C), how much do we want AA to demand in bribes—from our global, societal perspective?
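
Just to put numbers on that question, using the second matrix above (my arithmetic, not the original post’s): a per-round side payment t from JW to AA turns the (C, C) payoffs (−1, 4) into (−1 + t, 4 − t), which brackets the range of bribes that could work.

```python
# A per-round bribe t from JW to AA changes (C, C) from (-1, 4)
# to (-1 + t, 4 - t).  Two natural bounds:
#   t >= 1: AA's cooperative payoff becomes non-negative, so he
#           profits in absolute terms instead of merely losing less.
#   t <= 2: JW still prefers bribed cooperation (4 - t >= 2) to the
#           mutual-defection payoff of 2.
for t in (1.0, 1.5, 2.0):
    aa, jw = -1 + t, 4 - t
    print(t, (aa, jw), aa + jw)  # the joint total stays at 3.0 regardless
```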

The whole affair makes for an interesting reminder of the different worldviews that people invent for themselves—seeming so natural and uniquely obvious from the inside—to make themselves the heroes of their own stories.