Insights from ‘The Strategy of Conflict’

Cross-posted from my blog.

I recently read Thomas Schelling’s book ‘The Strategy of Conflict’. Many of the ideas it contains are now pretty widely known, especially in the rationalist community, such as the value of Schelling points when coordination must be obtained without communication, or the value of being able to commit oneself to actions that seem irrational. However, there are a few ideas that I got from the book that I don’t think are as embedded in the public consciousness.

Schelling points in bargaining

The first such idea is the value of Schelling points in bargaining situations where communication is possible, as opposed to coordination situations where it is not. For instance, if you and I were dividing up a homogeneous pie that we both wanted as much of as possible, it would be strange if I told you that I demanded at least 52.3% of the pie. If I did, you would probably expect me to give some argument for the number 52.3% that distinguishes it from 51% or 55%. Indeed, it would be more strange than asking for 66.67%, which itself would be more strange than asking for 50%, which would be the most likely outcome were we to really run the experiment. Schelling uses as an example

the remarkable frequency with which long negotiations over complicated quantitative formulas or ad hoc shares in some costs or benefits converge ultimately on something as crudely simple as equal shares, shares proportionate to some common magnitude (gross national product, population, foreign-exchange deficit, and so forth), or the shares agreed on in some previous but logically irrelevant negotiation.

The explanation is basically that in bargaining situations like these, any agreement could be made better for either side, but it can’t be made better for both simultaneously, and any agreement is better than no agreement. Talk is cheap, so it’s difficult for any side to credibly commit to only accept certain arbitrary outcomes. Therefore, as Schelling puts it,

Each party’s strategy is guided mainly by what he expects the other to accept or insist on; yet each knows that the other is guided by reciprocal thoughts. The final outcome must be a point from which neither expects the other to retreat; yet the main ingredient of this expectation is what one thinks the other expects the first to expect, and so on. Somehow, out of this fluid and indeterminate situation that seemingly provides no logical reason for anybody to expect anything except what he expects to be expected to expect, a decision is reached. These infinitely reflexive expectations must somehow converge upon a single point, at which each expects the other not to expect to be expected to retreat.

In other words, a Schelling point is a ‘natural’ outcome that somehow has the intrinsic property that each party can be expected to demand that they do at least as well as they would in that outcome.

Another way of putting this is that once we are bargained down to a Schelling point, we are not expected to let ourselves be bargained down further. Schelling uses the example of soldiers fighting over a city. If one side retreats 13 km, they might be expected to retreat even further, unless they retreat to the single river running through the city. This river can serve as a Schelling point, and the attacking force might genuinely expect that their opponents will retreat no further.

Threats and promises

A second interesting idea contained in the book is the distinction between threats and promises. On some level, they’re quite similar bargaining moves: in both cases, I make my behaviour dependent on yours by committing to sometimes do things that aren’t narrowly rational, so that behaving in the way I want you to becomes profitable for you. When I threaten you, I say that if you don’t do what I want, I’ll force you to incur a cost even at a cost to myself, perhaps by beating you up, ruining your reputation, or refusing to trade with you. The purpose is to ensure that doing what I want becomes more profitable for you, taking my threat into account. When I make a promise, I say that if you do what I want, I’ll make your life better, again perhaps at a cost to myself, perhaps by giving you money, recommending that others hire you, or abstaining from behaviour that you dislike. Again, the purpose is to ensure that doing what I want, once you take my promise into account, is better for you than other options.

There is an important strategic difference between threats and promises, however. If a threat is successful, it is never carried out. Conversely, the whole point of a promise is to induce behaviour that obliges you to carry the promise out. This means that in the ideal case, threat-making is cheap for the threatener, but promise-making is expensive for the promiser.

This difference has implications for one’s ability to convince one’s bargaining partner that the threat or promise will be carried out. If you and I make five bargains in a row, and in the first four situations I made a promise that I subsequently kept, then you have some reason for confidence that I will keep my fifth promise. However, if I make four threats in a row, all of which successfully deter you from engaging in behaviour that I don’t want, then the fifth time I threaten you, you have no more evidence that I will carry out the threat than you did initially. Therefore, building a reputation as somebody who carries out their threats is somewhat more difficult than building a reputation for keeping promises. I must either occasionally make threats that fail to deter my bargaining partner, thus incurring both the cost of my partner not behaving in the way I prefer and also the cost of carrying out the threat, or visibly make investments that will make it cheap for me to carry out threats when necessary, such as hiring goons or being quick-witted and good at gossiping.
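The evidential asymmetry can be made concrete with a toy Bayesian model (my own illustration, not from the book, with made-up likelihoods): a kept promise is a costly, observed action and so shifts your beliefs about me, while a successful threat is never tested and so shifts nothing.

```python
def update_on_kept_promise(p_resolute, keep_if_resolute=1.0, keep_if_bluffer=0.5):
    """Posterior probability that I am resolute, after you watch me
    actually carry out a promise (likelihoods are hypothetical)."""
    kept = p_resolute * keep_if_resolute
    return kept / (kept + (1 - p_resolute) * keep_if_bluffer)

p = 0.5  # your prior that I follow through
for _ in range(4):  # four promises, each visibly kept
    p = update_on_kept_promise(p)
print(round(p, 3))  # -> 0.941: each kept promise raises your confidence

# A successful threat is never carried out, so deterrence looks the same
# whether I would have followed through or not: the likelihood ratio is 1,
# and your posterior after four successful threats is still the 0.5 prior.
```

The specific numbers are arbitrary; the structural point is that only the promise branch ever generates an observation that distinguishes a resolute partner from a bluffer.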

Mutually Assured Destruction

The final cluster of ideas contained in the book that I will talk about are implications of the model of mutually assured destruction (MAD). In a MAD dynamic, two parties both have the ability, and to some extent the inclination, to destroy the other party, perhaps by exploding a large number of nuclear bombs near them. However, they do not have the ability to destroy the other party immediately: when one party launches their nuclear bombs, the other has some amount of time to launch a second strike, sending nuclear bombs to the first party, before the first party’s bombs land and annihilate the second party. Since both parties care about not being destroyed more than they care about destroying the other party, and both parties know this, they each adopt a strategy where they commit to launching a second strike in response to a first strike, and therefore no first strike is ever launched.

Compare the MAD dynamic to the case of two gunslingers in a wild west standoff. Each gunslinger knows that if she does not shoot first, she will likely die before being able to shoot back. Therefore, as soon as she thinks that the other is about to shoot, or that the other thinks that she is about to shoot, or that the other thinks that she thinks that the other is about to shoot, et cetera, she needs to shoot before the other does. As a result, the gunslinger dynamic is an unstable one that is likely to result in bloodshed. In contrast, the MAD dynamic is characterised by peacefulness and stability, since each party knows that the other will not launch a first strike for fear of a second strike.
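The contrast between the two dynamics can be sketched as a pair of stylised two-player games (the payoff numbers are my own illustrative choices, not Schelling’s): in the gunslinger game, striking first beats waiting, so pre-emption is tempting; under MAD, a guaranteed second strike makes striking first no better than being struck.

```python
# payoff[(my_action, their_action)] = my payoff, actions are "strike"/"wait"

def first_strike_tempting(payoff):
    """True if striking is strictly better for me than waiting,
    given that the other side waits."""
    return payoff[("strike", "wait")] > payoff[("wait", "wait")]

gunslinger = {  # shooting first wins; being shot while waiting is fatal
    ("strike", "wait"): 1, ("wait", "wait"): 0,
    ("strike", "strike"): -1, ("wait", "strike"): -2,
}
mad = {  # any first strike triggers a second strike: mutual destruction
    ("strike", "wait"): -2, ("wait", "wait"): 0,
    ("strike", "strike"): -2, ("wait", "strike"): -2,
}
print(first_strike_tempting(gunslinger))  # True: unstable, races to shoot
print(first_strike_tempting(mad))         # False: stable, both wait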

In the final few chapters of the book, Schelling discusses what has to happen in order to ensure that MAD remains stable. One implication of the model that is perhaps counterintuitive is that if you and I are in a MAD dynamic, it is vitally important to me that you have second-strike capability, that you know you have it, and that you know that I know you have it. If you don’t have second-strike capability, then you will realise that I have the ability to launch a first strike. Furthermore, if you think that I know that you don’t have second-strike capability, then you’ll think that I’ll be tempted to launch a first strike myself (since perhaps my favourite outcome is one where you’re destroyed). In this case, you’d rather launch a first strike before I do, since you anticipate being destroyed either way. Therefore, I have an incentive to help you invest in technology that will help you accurately perceive whether or not I am striking, as well as technology that will hide your weapons (like ballistic missile submarines) so that I cannot destroy them with a first strike.

A second implication of the MAD model is that it is much more stable if both sides have more nuclear weapons. Suppose that I need 100 nuclear weapons to destroy my enemy, and he is thinking of using his nuclear weapons to wipe out mine (since perhaps mine are not hidden), allowing him to then strike me with impunity. Schelling writes:

For illustration suppose his accuracies and abilities are such that one of his missiles has a 50-50 chance of knocking out one of ours. Then, if we have 200, he needs to knock out just over half; at 50 percent reliability he needs to fire just over 200 to cut our residual supply to less than 100. If we had 400, he would need to knock out three-quarters of ours; at a 50 percent discount rate for misses and failures he would need to fire more than twice 400, that is, more than 800. If we had 800, he would have to knock out seven-eighths of ours, and to do it with 50 percent reliability he would need over three times that number, or more than 2400. And so on. The larger the initial number on the “defending” side, the larger the multiple required by the attacker in order to reduce the victim’s residual supply to below some “safe” number.
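Schelling’s arithmetic here follows from a simple model (the code is my reconstruction, not from the book): if the attacker spreads his shots evenly and each shot independently kills its target with probability 0.5, then firing k shots per target leaves each target alive with probability 0.5^k, so the required number of shots grows as N·log2(N/100).

```python
import math

def shots_required(defenders, safe=100, kill_prob=0.5):
    """Shots the attacker must fire, spread evenly across targets, for the
    expected number of surviving defending missiles to fall to `safe`:
    survivors = defenders * (1 - kill_prob) ** (shots / defenders)."""
    return defenders * math.log(defenders / safe) / math.log(1 / (1 - kill_prob))

for n in (200, 400, 800):
    print(n, round(shots_required(n)))
# 200 -> 200, 400 -> 800, 800 -> 2400, matching Schelling's figures
```

Doubling the defender’s stockpile more than doubles the attacker’s required salvo, which is the “larger multiple” Schelling refers to.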

Consequently, if both sides have many times more nuclear weapons than are needed to destroy the entire world, the situation is much more stable than if they had barely enough to destroy the enemy: each is confident in their second-strike capabilities, and doesn’t need to respond as aggressively to arms buildups by the other party.

It is important to note that this conclusion is only valid in a ‘classic’ simplified MAD dynamic. If, for each nuclear weapon that you own, there is some possibility that a rogue actor will steal the weapon and use it for their own ends, the value of large arms buildups becomes much less clear.

The final conclusion I’d like to draw from this model is that it would be preferable to have weapons that cannot destroy other weapons. For instance, suppose that both parties were countries armed with biological weapons that, when released, infected a large proportion of the other country, caused obvious symptoms, and then killed the infected a week later, leaving a few days between the onset of symptoms and the loss of the ability to do things effectively. In such a situation, you would know that if I struck first, you would have ample ability to get still-functioning people to your weapons centres and launch a second strike, regardless of your ability to detect the biological weapon before it arrives, or the number of weapons and weapons centres that you or I have. Therefore, you are not tempted to launch first. Since this reasoning holds regardless of what type of weapon you have, in a MAD dynamic it is always better for me to have this type of biological weapon rather than nuclear weapons that can potentially destroy weapons centres, so as to preserve your second-strike capability. I speculatively think that this argument should hold for real-life biological weapons, since it seems to me that they could be destructive enough to act as a deterrent, but that authorities could detect their spread early enough to send remaining healthy government officials to launch a second strike.
