The Schelling Choice is “Rabbit”, not “Stag”

Followup/distillation/alternate-take on Duncan Sabien's Dragon Army Retrospective and Open Problems in Group Rationality.


There’s a particular failure mode I’ve witnessed, and fallen into myself:

I see a problem. I see what seems, to me, to be an obvious solution to the problem. If only everyone Took Action X, we could Fix Problem Z. So I start X-ing, and maybe talking about how other people should start X-ing. Action X takes some effort on my part, but it’s obviously worth it.

And yet… nobody does. Or not enough people do. And a few months later, here I am, still taking Action X and feeling burned and frustrated.

Or –

– the problem is that everyone is taking Action Y, which directly causes Problem Z. If only everyone would stop Y-ing, Problem Z would go away. Action Y seems obviously bad; clearly we should be on the same page about this. So I start noting to people when they’re doing Action Y, and expect them to stop.

They don’t stop.

So I start subtly socially punishing them for it.

They don’t stop. What’s more… now they seem to be punishing me.

I find myself getting frustrated, perhaps angry. What’s going on? Are people wrong-and-bad? Do they have wrong-and-bad beliefs?

Alas. So far in my experience it hasn’t been that simple.


A recap of ‘Rabbit’ vs ‘Stag’

I’d been planning to write this post for years. Duncan Sabien went ahead and wrote it before I got around to it. But Dragon Army Retrospective and Open Problems in Group Rationality are both lengthy posts with a lot of points, and it still seemed worth highlighting this particular failure mode in a single post.

I used to think a lot in terms of Prisoner’s Dilemma, and “Cooperate”/“Defect.” I’d see problems that could easily be solved if everyone just put a bit of effort in, which would benefit everyone. And people didn’t put the effort in, and this felt like a frustrating, obvious coordination failure. Why do people defect so much?

Eventually Duncan shifted towards using Stag Hunt rather than Prisoner’s Dilemma as the model here. If you haven’t read it before, it’s worth reading the description in full. If you’re familiar, you can skip to my current thoughts below.

[note: I changed the word ‘utility’ in the quoted section to ‘resource’, which I think is more technically accurate]

My new favorite tool for modeling this is stag hunts, which are similar to prisoner’s dilemmas in that they contain two or more people each independently making decisions which affect the group. In a stag hunt:

  • Imagine a hunting party venturing out into the wilderness.

  • Each player may choose stag or rabbit, representing the type of game they will try to bring down.

  • All game will be shared within the group (usually evenly, though things get more complex when you start adding in real-world arguments over who deserves what).

  • Bringing down a stag is costly and effortful, and requires coordination, but has a large payoff. Let’s say it costs each player 5 points of resource (time, energy, bullets, etc.) to participate in a stag hunt, but a stag is worth 50 resource (in the form of food, leather, etc.) if you catch one.

  • Bringing down rabbits is low-cost and low-effort and can be done unilaterally. Let’s say it only costs each player 1 point of resource to hunt rabbit, and you get 3 resource as a result.

  • If any player unexpectedly chooses rabbit while others choose stag, the stag escapes through the hole in the formation and is not caught. Thus, if five players all choose stag, they lose 25 resource and gain 50 resource, for a net gain of 25 (or +5 apiece). But if four players choose stag and one chooses rabbit, they lose 21 resource and gain only 3.

This creates a strong pressure toward having the Schelling choice be rabbit. It’s saner and safer (spend 5, gain 15, net gain of 10 or +2 apiece), especially if you have any doubt about the other hunters’ ability to stick to the plan, or about the other hunters’ faith in the other hunters, or about the other hunters’ current resources and ability to even take a hit of 5 resource, or about whether or not the forest contains a stag at all.
Let’s work through a specific example. Imagine that the hunting party contains the following five people:

  • Alexis (currently has 15 resource “in the bank”)
  • Blake (currently has 12)
  • Cameron (9)
  • Dallas (6)
  • Elliott (5)

If everyone successfully coordinates to choose stag, then the end result will be positive for everyone. The stag costs everyone 5 resource to bring down, and then its 50 resource is divided evenly so that everyone gets 10, for a net gain of 5. The array [15, 12, 9, 6, 5] has bumped up to [20, 17, 14, 11, 10].

If everyone chooses rabbit, the end result is also positive, though less excitingly so. Rabbits cost 1 to hunt and provide 3 when caught, so the party will end up at [17, 14, 11, 8, 7].

But imagine the situation where a stag hunt is attempted, but unsuccessful. Let’s say that Blake quietly decides to hunt rabbit while everyone else chooses stag. What happens?

Alexis, Cameron, Dallas, and Elliott each lose 5 resource while Blake loses 1. The rabbit that Blake catches is divided five ways, for a total of 0.6 resource apiece. Now our array looks like [10.6, 11.6, 4.6, 1.6, 0.6].

(Remember, Blake only spent 1 resource in the first place.)

If you’re Elliott, this is a super scary result to imagine. You no longer have enough resources in the bank to be self-sustaining—you can’t even go out on another rabbit hunt, at this point.

And so, if you’re Elliott, it’s tempting to preemptively choose rabbit yourself. If there’s even a chance that the other players might defect on the overall stag hunt (because they’re tired, or lazy, or whatever) or worse, if there might not even be a stag out there in the woods today, then you have a strong motivation to self-protectively husband your resources. Even if it turns out that you were wrong about the others, and you end up being the only one who chose rabbit, you still end up in a much less dangerous spot: [10.6, 7.6, 4.6, 1.6, 4.6].

Now imagine that you’re Dallas, thinking through each of these scenarios. In both cases, you end up pretty screwed, with your total resource reserves at 1.6. At that point, you’ve got to drop out of any future stag hunts, and all you can do is hunt rabbit for a while until you’ve built up your resources again.

So as Dallas, you’re reluctant to listen to any enthusiastic plan to choose stag. You’ve got enough resources to absorb one failure, and so you don’t want to do a stag hunt until you’re really darn sure that there’s a stag out there, and that everybody’s really actually for real going to work together and try their hardest. You’re not opposed to hunting stag, you’re just opposed to wild optimism and wanton, frivolous burning of resources.
Meanwhile, if you’re Alexis or Blake, you’re starting to feel pretty frustrated. I mean, why bother coming out to a stag hunt if you’re not even actually willing to put in the effort to hunt stag? Can’t these people see that we’re all better off if we pitch in hard, together? Why are Dallas and Elliott preemptively talking about rabbits when we haven’t even tried catching a stag yet?

I’ve recently been using the terms White Knight and Black Knight to refer, not to specific people like Alexis and Elliott, but to the roles that those people play in situations requiring this kind of coordination. White Knight and Black Knight are hats that people put on or take off, depending on circumstances.

The White Knight is a character who has looked at what’s going on, built a model of the situation, decided that they understand the Rules, and begun to take confident action in accordance with those Rules. In particular, the White Knight has decided that the time to choose stag is obvious, and is already common knowledge/has the Schelling nature. I mean, just look at the numbers, right?

The White Knight is often wrong, because reality is more complex than the model even if the model is a good model. Furthermore, other people often don’t notice that the White Knight is assuming that everyone knows that it’s time to choose stag—communication is hard, and the double illusion of transparency is a hell of a drug, and someone can say words like “All right, let’s all get out there and do our best” and different people in the room can draw very different conclusions about what that means.

So the White Knight burns resources over and over again, and feels defected on every time someone “wrongheadedly” chooses rabbit, and meanwhile the other players feel unfairly judged and found wanting according to a standard that they never explicitly agreed to (remember, choosing rabbit should be the Schelling option, according to me), and the whole thing is very rough for everyone.

If this process goes on long enough, the White Knight may burn out and become the Black Knight. The Black Knight is a more mercenary character—it has limited resources, so it has to watch out for itself, and it’s only allied with the group to the extent that the group’s goals match up with its own. It’s capable of teamwork and coordination, but it’s not zealous. It isn’t blinded by optimism or patriotism; it’s there to engage in mutually beneficial trade, while taking into account the realities of uncertainty and unreliability and miscommunication.

The Black Knight doesn’t like this whole frame in which doing the safe and conservative thing is judged as “defection.” It wants to know who this White Knight thinks he is, that he can just declare that it’s time to choose stag, without discussion or consideration of cost. If anyone’s defecting, it’s the White Knight, by going around getting mad at people for following local incentive gradients and doing the predictable thing.

But the Black Knight is also wrong, in that sometimes you really do have to be all-in for the thing to work. You can’t always sit back and choose the safe, calculated option—there are, sometimes, gains that can only be gotten if you have no exit strategy and leave everything you’ve got on the field.

I don’t have a solution for this particular dynamic, except for a general sense that shining more light on it (dignifying both sides, improving communication, being willing to be explicit, making it safe for both sides to be explicit) will probably help. I think that a “technique” which zeroes in on ensuring shared common-knowledge understanding of “this is what’s good in our subculture, this is what’s bad, this is when we need to fully commit, this is when we can do the minimum” is a promising candidate for defusing the whole cycle of mutual accusation and defensiveness.

(Circling with a capital “C” seems to be useful for coming at this problem sideways, whereas mission statements and manifestos and company handbooks seem to be partially-successful-but-high-cost methods of solving it directly.)
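(Aside: the payoff arithmetic in the quoted example is easy to check mechanically. Below is a minimal Python sketch of the model as quoted; the constants come straight from the rules above, while the `hunt` function and its names are my own, purely illustrative.)

```python
# A minimal sketch of the stag-hunt payoff model quoted above.
# The constants come from the quoted rules; the function is illustrative.
STAG_COST, STAG_VALUE = 5, 50
RABBIT_COST, RABBIT_VALUE = 1, 3

def hunt(bank, choices):
    """Return each hunter's resources after one hunt.

    bank    -- starting resources, e.g. [15, 12, 9, 6, 5]
    choices -- "stag" or "rabbit" for each hunter
    """
    n = len(bank)
    # The stag is only caught if every single hunter commits to it.
    stag_caught = all(c == "stag" for c in choices)
    rabbits = choices.count("rabbit")
    # All game is shared evenly across the whole party.
    share = ((STAG_VALUE if stag_caught else 0) + rabbits * RABBIT_VALUE) / n
    costs = [STAG_COST if c == "stag" else RABBIT_COST for c in choices]
    return [round(b - cost + share, 1) for b, cost in zip(bank, costs)]

party = [15, 12, 9, 6, 5]  # Alexis, Blake, Cameron, Dallas, Elliott
print(hunt(party, ["stag"] * 5))                                # [20.0, 17.0, 14.0, 11.0, 10.0]
print(hunt(party, ["rabbit"] * 5))                              # [17.0, 14.0, 11.0, 8.0, 7.0]
print(hunt(party, ["stag", "rabbit", "stag", "stag", "stag"]))  # Blake defects: [10.6, 11.6, 4.6, 1.6, 0.6]
print(hunt(party, ["stag"] * 4 + ["rabbit"]))                   # Elliott defects: [10.6, 7.6, 4.6, 1.6, 4.6]
```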

The key conceptual difference that I find helpful here is acknowledging that “Rabbit” / “Stag” are both positive choices that bring about utility. “Defect” feels like it brings in connotations that aren’t always accurate.

Saying that you’re going to pay rent on time, and then not doing so, is defecting.

But if someone shows up saying “hey, let’s all do Big Project X” and you’re not that enthusiastic about Big Project X, but you sort of nod noncommittally, and then it turns out they thought you were going to put 10 hours of work into it and you thought you were going to put in 1, and then they get mad at you… I think it’s more useful to think of this as “choosing rabbit” than as “defecting.”

Likewise, it’s “rabbit” if you say “nah, I just don’t think Big Project X is important”. Going about your own projects and not signing up for every person’s crusade is a perfectly valid action.

Likewise, it’s “rabbit” if you say “look, I realize we’re in a bad equilibrium right now, and it’d be better if we all switched to A New Norm. But right now the Norm is X, and unless you are actually sure that we have enough buy-in for The New Norm, I’m not going to start doing a costly thing that I don’t think is even going to work.”


A lightweight, but concrete example

At my office, we have Philosophy Fridays, where we try to get in sync about important underlying philosophical and strategic concepts. What is our organization for? How does it connect to the big picture? What individual choices about particular site-features are going to bear on that big picture?

We generally agree that Philosophy Friday is important. But often, we seem to disagree a lot about the right way to go about it.

In a recent example: it often felt to me that our conversations were sort of meandering and inefficient. Meandering conversations that don’t go anywhere are a stereotypical rationalist failure mode. I do it a lot by default myself. I wish that people would punish me when I’m steering into ‘meandering mode’.

So at some point I said ‘hey, this seems kinda meandering.’

And it kinda meandered a bit more.

And I said, in a move designed to be somewhat socially punishing: “I don’t really trust the conversation to go anywhere useful.” And then I took out my laptop and mostly stopped paying attention.

And someone else on the team responded, eventually, with something like “I don’t know how to fix the situation, because you checked out a few minutes ago, and I felt punished and wanted to respond, but then you didn’t give me space to.”

“Hmm,” I said. I don’t remember exactly what happened next, but eventually he explained:

Meandering conversations were important to him, because they gave him space to actually think. I pointed to examples of meetings that I thought had gone well, that ended with Google docs full of what I thought had been useful ideas and developments. And he said “those all seemed like examples of mediocre meetings to me – we had a lot of ideas, sure. But I didn’t feel like I actually got to come to a real decision about anything important.”

The ‘meandering’ quality allowed a conversation to explore subtle nuances of things, to fully explore how a bunch of ideas would intersect. And this was necessary to eventually reach a firm conclusion, to leave behind the niggling doubts of “is this *really* the right path for the organization?” so that he could firmly commit to a long-term strategy.

We still debate the right way to conduct Philosophy Friday at the office. But now we have a slightly better frame for that debate, and awareness of the tradeoffs involved. We discuss ways to get the good elements of the “meandering” quality while still making sure to end with clear next-actions. And we discuss alternate modes of conversation we can intelligently shift between.

There was a time when I would have preemptively gotten really frustrated, and started rationalizing reasons why my teammate was willfully pursuing a bad conversational norm. Fortunately, I had thought enough about this sort of problem that I noticed I was falling into a failure mode, and shifted mindsets.

Rabbit, in this case, was “everyone just sort of pursues whatever conversational styles seem best to them, in an uncoordinated fashion”, and Stag was “we deliberately choose and enforce particular conversational norms.”

We haven’t yet coordinated enough to really have a “stag” option we can coordinate around. But I expect that the conversational norms we eventually settle into will be better than if we had naively enforced either my or my teammate’s preferred norms.


Takeaways

There seem to be a couple of important takeaways here.

One is that, yes:

Sometimes stag hunts are worth it.

I’d like people in my social network to be aware that sometimes, it’s really important for everyone to adopt a new norm, or for everyone to throw themselves 100% into something, or for a whole lot of person-hours to get thrown into a project.

When discussing whether to embark on a stag hunt, it’s useful to have shorthand to communicate why you might ever want to put a lot of work into a concerted, coordinated effort. And then you can discuss the tradeoffs seriously.

I have more to say about what sort of stag hunts seem doable. But for this post I want to focus primarily on the fact that…

The Schelling option is Rabbit

Some communities have established particular norms favoring ‘stag’. But in modern, atomized, Western society, you should probably not assume this as a default. If you want people to choose stag, you need to spend special effort building common knowledge that Big Project X matters, and is worthwhile to pursue, and get everyone on board with it.

Corollary: Creating common knowledge is hard. If you haven’t put in that work, you should assume Big Project X is going to fail, and/or that it will require a few people putting in herculean effort “above their fair share”, which may not be sustainable for them.

This depends on whether effort is fungible. If you need 100 units of effort, you can make do with one person putting in 100 units of effort. But if you need everyone to adopt a new norm that they haven’t bought into, it just won’t work.

If you are proposing what seems (to you) quite sensible, but nobody seems to agree…

…well, maybe people are being biased in some way, or motivated to avoid considering your proposed stag hunt. People sure do seem biased about things, in general, even when they know about biases. So this may well be part of the issue.

But I think it’s quite likely that you’re dramatically underestimating the inferential distance – both the distance between their outlook and “why your proposed action is good”, and the distance between your outlook and “why their current frame is weighing tradeoffs very differently than your current frame.”

Much of the time, I feel like getting angry and frustrated… is something like “wasted motion” or “the wrong step in the dance.”

Not entirely – anger and frustration are useful motivators. They help me notice that something about the status quo is wrong and needs fixing. But I think the specific flavor of frustration that stems from “people should be cooperating but aren’t” is often, in some sense, actually wrong about reality. People are actually making reasonable decisions given the current landscape.

Anger and frustration help drive me to action, but often they come with a sort of tunnel vision. They lead me to dig in my heels and get ready to fight – at a moment when what I really need is empathy and curiosity. I either need to figure out how to communicate better, to help someone understand why my plan is good, or I need to learn what tradeoffs I’m missing, which they can see more clearly than I do.

My own strategies right now

In general, choose Rabbit.

  • Keep around 30% slack in reserve (such that I can absorb not one, not two, but three major surprise costs without starting to burn out). Don’t spend energy helping others if I’ve dipped below 30% for long – focus on making sure my own needs are met.

  • Find local improvements I can make that don’t require much coordination from others.

Follow rabbit trails into Stag Country

Given a choice, seek out “Rabbit” actions that preferentially build option value for improved coordination later on.

  • Metaphorically, this means “Follow rabbit trails that lead into Stag-and-Rabbit Country”, where I’ll have opportunities to say:

    • “Hey guys, I see a stag! Are we all 100% up for hunting it?” and then maybe it so happens we can stag hunt together.

    • Or, I can sometimes say, at small-but-manageable-cost-to-myself, “hey guys, I see a whole bunch of rabbits over there, you could hunt them if you want.” And others can sometimes do the same for me.

  • Sliiightly more concretely, this means:

    • Given the opportunity, without requiring actions on the part of other people… pursue actions that demonstrate my trustworthiness, and which build bits of infrastructure that’ll make it easier to work together in the future.

    • Help people out if I can do so without dipping below 30% slack for too long, especially if I expect it to increase the overall slack in the system.

(I’ll hopefully have more to say about this in the future.)

Get curious about other people’s frames

If a person and I have argued through the same set of points multiple times, each time expecting our points to be a solid knockdown of the other’s argument… and if nobody has changed their mind…

Probably we are operating in two different frames. Communicating across frames is very hard, and beyond the scope of this post to teach. But cultivating curiosity and empathy is a good first step.

Occasionally run “Kickstarters for Stag Hunts.” If people commit, hunt stag.

For example, the call-to-action in my Relationship Between the Village and Mission post (where I asked people to contact me if they were serious about improving the Village) was designed to give me information about whether it’s possible to coordinate on a stag hunt to improve the Berkeley rationality village.
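(The “Kickstarter” framing is essentially an assurance contract: nobody pays the stag-hunt cost unless enough people have verifiably committed first. Here’s a toy sketch in the same vein as the earlier payoff model; the function name, the reused hunter names, and the threshold are all illustrative assumptions of mine, not anything from the post.)

```python
# Toy sketch of a "Kickstarter for a stag hunt" (an assurance contract).
# Names and threshold are illustrative, not from the post.
def kickstarted_choices(pledges, threshold):
    """Everyone hunts stag only if enough hunters pledged to commit;
    otherwise everyone defaults to the safe Schelling choice, rabbit."""
    committed = sum(pledges.values())
    choice = "stag" if committed >= threshold else "rabbit"
    return {name: choice for name in pledges}

pledges = {"Alexis": True, "Blake": True, "Cameron": True, "Dallas": False, "Elliott": False}
print(kickstarted_choices(pledges, threshold=5))                      # 3 of 5 pledged: everyone hunts rabbit
print(kickstarted_choices({n: True for n in pledges}, threshold=5))   # full buy-in: everyone hunts stag
```

The design point is that the commitment step happens before anyone burns resources, so a failed “kickstarter” costs nothing beyond the asking.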