Tactical vs. Strategic Cooperation

As I’ve matured, one of the (101-level?) social skills I’ve come to appreciate is asking directly for the narrow, specific thing you want, instead of debating around it.

What do I mean by “debating around” an issue?

Things like:

“If we don’t do what I want, horrible things A, B, and C will happen!”

(This tends to degenerate into a miserable argument over how likely A, B, and C are, or a referendum on how neurotic or pessimistic I am.)

“You’re such an awful person for not having done [thing I want]!”

(This tends to degenerate into a miserable argument about each other’s general worth.)

“Authority Figure Bob will disapprove if we don’t do [thing I want]!”

(This tends to degenerate into a miserable argument about whether we should respect Bob’s authority.)

It’s been astonishing to me how much better people respond if instead I just say, “I really want to do [thing I want]. Can we do that?”

No, it doesn’t guarantee that you’ll get your way, but it makes it a whole lot more likely. More than that, it means that when you do get into negotiation or debate, that debate stays targeted to the actual decision you’re disagreeing about, instead of a global fight about anything and everything, and thus is more likely to be resolved.

Real-life example:

Back at MetaMed, I had a coworker who believed in alternative medicine. I didn’t. This caused a lot of spoken and unspoken conflict. There were global values issues at play: reason vs. emotion, logic vs. social charisma, whether her perspective on life was good or bad. I’m embarrassed to say I was rude and inappropriate. But it was coming from a well-meaning place; I didn’t want any harm to come to patients from misinformation, and I was very frustrated, because I didn’t see how I could prevent that outcome.

Finally, at my wit’s end, I blurted out what I wanted: I wanted to have veto power over any information we sent to patients, to make sure it didn’t contain any factual inaccuracies.

Guess what? She agreed instantly.

This probably should have been obvious (and I’m sure it was obvious to her.) My job was producing the research reports, while her jobs included marketing and operations. The whole point of division of labor is that we can each stick to our own tasks and not have to critique each other’s entire philosophy of life, since it’s not relevant to getting the company’s work done as well as possible. But I was extremely inexperienced at working with people at that time.

It’s not fair to your coworkers to try to alter their private beliefs. (Would you try to change their religion?) A company is an association of people who cooperate on a local task. They don’t have to see eye-to-eye about everything in the world, so long as they can work out their disagreements about the task at hand.

This is a skill that “practical” people have, and “idealistic” and “theoretical” people are often weak at—the ability to declare some issues off topic. We’re trying to decide what to do in the here and now; we don’t always have to turn things into a debate about underlying ethical or epistemological principles. It’s not that principles don’t exist (though some self-identified “pragmatic” or “practical” people are against principles per se, and I don’t agree with them). It’s that it can be unproductive to get into debates about general principles when they take up too much time and generate too much ill will, and when it isn’t necessary to come to agreement about the tactical plan of what to do next.

Well, what about longer-term, more intimate partnerships? Maybe in a strictly professional relationship you can avoid talking about politics and religion altogether, but in a closer relationship, like a marriage, you actually want to get alignment on underlying values, worldviews, and principles. My husband and I spend a ton of time talking about the diffs between our opinions, and reconciling them, until we do basically have the same worldview, seen through the lens of two different temperaments. Isn’t that a counterexample to this “just debate the practical issue at hand” thing? Isn’t intellectual discussion really valuable to intellectually intimate people?

Well, it’s complicated, because I’ve found that the same trick of narrowing the scope of the argument and just asking for what I want resolves debates with my husband too.

When I find myself “debating around” a request, it’s often debating in bad faith. I’m not actually trying to find out what the risks of [not what I want] are in real life; I’m trying to use talking about danger as a way to scare him into doing [what I want]. If I’m quoting an expert nutritionist to argue that we should have home-cooked family dinners, my motivation is not actually curiosity about the long-term health dangers of not eating as a family, but simply that I want family dinners and I’m throwing spaghetti at a wall, hoping some pro-dinner argument will work on him. The “empirical” or “intellectual” debate is just so much rhetorical window dressing for an underlying request. And when that’s going on, it’s better to notice and redirect to the actual underlying desire.

Then you can get to the actual negotiation, like: what makes family dinners undesirable to you? How could we mitigate those harms? What alternatives would work for both of us?

Debating a far-mode abstraction (like “how do home eating habits affect children’s long-term health?”) is often an inefficient way of debating what’s really a near-mode practical issue only weakly related to the abstraction (like “what kind of schedule should our household have around food?”). The far-mode abstract question still exists and might be worth getting into as well, but it may also recede dramatically in importance once you’ve resolved the practical issue.

One of my long-running (and interesting and mutually respectful) disagreements with my friend Michael Vassar is about the importance of local/tactical vs. global/strategic cooperation. Compared to me, he’s much more likely to value getting to alignment with people on fundamental values, epistemology, and world-models. He would rather cooperate with people who share his principles but have opposite positions on object-level, near-term decisions, than people who oppose his principles but are willing to cooperate tactically with him on one-off decisions.

The reasoning for this, he told me, is simply that the long term is long, and the short term is short. There’s a lot more value to be gained from someone who keeps actively pursuing goals aligned with yours, even when they’re far away and you haven’t spoken in a long time, than from someone you can persuade or incentivize to do a specific thing you want right now, but who won’t be any help in the long run (or might actually oppose your long-run aims).

This seems like fine reasoning to me, as far as it goes. I think my point of departure is that I estimate different numbers for probabilities and expected values than he does. I expect to get a lot of mileage out of relatively transactional or local cooperation (e.g. donors to my organization who don’t buy into all of my ideals, synagogue members who aren’t intellectually rigorous but are good people to cooperate with on charity, mutual aid, or childcare). I expect getting to alignment on principles to be really hard, expensive, and unlikely to work, most of the time, for me.

Now, I think compared to most people in the world, we’re both pretty far on the “long-term cooperation” side of the spectrum.

It’s pretty standard advice in business books about company culture, for instance, to note that the most successful teams are more likely to have shared idealistic visions and to get along with each other as friends outside of work. Purely transactional, working-for-a-paycheck arrangements don’t really inspire excellence. You can trust strangers in competitive market systems that effectively penalize fraud, but large areas of life aren’t like that, and you actually have to have pretty broad value-alignment with people to get any benefit from cooperating.

I think we’d both agree that it’s unwise (and immoral, which is kind of the same thing) to try to benefit in the short term from allying with terrible people. The question is, who counts as terrible? What sorts of lapses in rigorous thinking are just normal human fallibility and which make a person seriously untrustworthy?

I’d be interested to read some discussion about when and how much it makes sense to prioritize strategic vs. tactical alliance.