Fairness vs. Goodness

It seems that back when the Prisoner’s Dilemma was still being worked out, Merrill Flood and Melvin Dresher tried a 100-fold iterated PD on two smart but unprepared subjects, Armen Alchian of UCLA and John D. Williams of RAND.

The kicker being that the payoff matrix was asymmetrical, with dual cooperation awarding JW twice as many points as AA:

(AA, JW)    JW: D       JW: C
AA: D       (0, 0.5)    (1, -1)
AA: C       (-1, 2)     (0.5, 1)

The resulting 100 iterations, with a log of comments written by both players, make for fascinating reading.

JW spots the possibilities of cooperation right away, while AA is slower to catch on.

But once AA does catch on to the possibilities of cooperation, AA goes on throwing in an occasional D… because AA thinks the natural meeting point for cooperation is a fair outcome, where both players get around the same number of total points.
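The arithmetic behind AA’s notion of fairness is easy to check. Here is a minimal sketch (my own illustration, not anything from the original transcript), assuming JW cooperates every round while AA mixes in some defections under the first payoff matrix:

```python
# Illustrative arithmetic (not from the original transcript): using the
# first payoff matrix above, how often must AA defect against a
# cooperating JW for the 100-round totals to come out even?

ROUNDS = 100
CC = (0.5, 1.0)   # both cooperate: (AA's points, JW's points)
DC = (1.0, -1.0)  # AA defects while JW cooperates

def totals(k):
    """Totals if AA defects in k rounds and cooperates in the rest."""
    aa = k * DC[0] + (ROUNDS - k) * CC[0]
    jw = k * DC[1] + (ROUNDS - k) * CC[1]
    return aa, jw

for k in (0, 10, 20, 30):
    aa, jw = totals(k)
    print(f"AA defects {k:2d} times: AA = {aa:5.1f}, JW = {jw:6.1f}")

# k = 0  -> AA 50.0, JW 100.0
# k = 20 -> AA 60.0, JW  60.0  (equal totals: one defection per five rounds)
```

Pure (C, C) leaves JW with double AA’s score; equal totals require AA to defect in one round out of every five. From AA’s side that looks like fairness; from JW’s side it looks like craziness.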

JW goes on trying to enforce (C, C), the option that maximizes total utility for both players, by punishing AA’s attempts at defection. JW’s log shows comments like “He’s crazy. I’ll teach him the hard way.”

Meanwhile, AA’s log shows comments such as “He won’t share. He’ll punish me for trying!”

I confess that my own sympathies lie with JW, and I don’t think I would have played AA’s game in AA’s shoes. This would seem to indicate that I’m more of a utilitarian than a fair-i-tarian. Life doesn’t always hand you fair games, and the best we can do for each other is play them positive-sum.

Though I might have been somewhat more sympathetic to AA, if the (C, C) outcome had actually lost him points, and only (D, C) had made it possible for him to gain them back. For example, this is also a Prisoner’s Dilemma:

(AA, JW)    JW: D      JW: C
AA: D       (-2, 2)    (2, 0)
AA: C       (-5, 6)    (-1, 4)

Theoretically, of course, utility functions are invariant up to positive affine transformation, so a utility’s absolute sign is not meaningful. But this is not always a good metaphor for real life.
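A quick sketch of that invariance (my own illustration): shift AA’s payoffs in the second matrix up by 5 and every entry becomes nonnegative, while AA’s best responses stay exactly where they were.

```python
# Sketch of the invariance claim (my own illustration): shift AA's
# payoffs in the second matrix by +5, a positive affine map u -> u + 5.
# Every entry becomes nonnegative, yet AA's best responses are unchanged.

aa_payoffs = {("D", "D"): -2, ("D", "C"): 2,
              ("C", "D"): -5, ("C", "C"): -1}
shifted = {outcome: u + 5 for outcome, u in aa_payoffs.items()}

for jw in ("D", "C"):
    best_before = max(("D", "C"), key=lambda aa: aa_payoffs[(aa, jw)])
    best_after = max(("D", "C"), key=lambda aa: shifted[(aa, jw)])
    assert best_before == best_after
    print(f"against JW playing {jw}, AA's best response is still {best_after}")
```

On paper nothing strategic has changed; the only thing the shift alters is whether AA’s numbers read as losses.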

Of course what we want in this case, societally speaking, is for JW to slip AA a bribe under the table. That way we can maximize social utility while letting AA go on making a profit. But if AA starts out with a negative number in (C, C), how much do we want AA to demand in bribes, from our global, societal perspective?
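Back-of-envelope arithmetic for that question (again my own illustration): a side payment t from JW to AA under (C, C) in the second matrix just moves points across the table, leaving the social total fixed at 3.

```python
# Back-of-envelope bribe arithmetic (my own illustration): a side
# payment t from JW to AA under (C, C) in the second matrix moves
# points across the table but leaves the social total fixed at 3.

CC_AA, CC_JW = -1, 4  # the (C, C) cell of the second matrix

for t in (0, 1, 2.5, 4):
    aa, jw = CC_AA + t, CC_JW - t
    print(f"bribe t = {t:>3}: AA = {aa:>4}, JW = {jw:>4}, total = {aa + jw}")
```

AA needs t > 1 before cooperation turns a profit at all, and t = 2.5 splits the totals evenly; everything in between is a fight over fairness rather than over total utility.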

The whole affair makes for an interesting reminder of the different worldviews that people invent for themselves, seeming so natural and uniquely obvious from the inside, to make themselves the heroes of their own stories.