Tactical vs. Strategic Cooperation

As I’ve matured, one of the (101-level?) social skills I’ve come to appreciate is asking directly for the narrow, specific thing you want, instead of debating around it.

What do I mean by “debating around” an issue?

Things like:

“If we don’t do what I want, horrible things A, B, and C will happen!”

(This tends to degenerate into a miserable argument over how likely A, B, and C are, or a referendum on how neurotic or pessimistic I am.)

“You’re such an awful person for not having done [thing I want]!”

(This tends to degenerate into a miserable argument about each other’s general worth.)

“Authority Figure Bob will disapprove if we don’t do [thing I want]!”

(This tends to degenerate into a miserable argument about whether we should respect Bob’s authority.)

It’s been astonishing to me how much better people respond if instead I just say, “I really want to do [thing I want]. Can we do that?”

No, it doesn’t guarantee that you’ll get your way, but it makes it a whole lot more likely. More than that, it means that when you do get into negotiation or debate, that debate stays focused on the actual decision you’re disagreeing about, instead of expanding into a global fight about anything and everything, and thus is more likely to be resolved.

Real-life example:

Back at MetaMed, I had a coworker who believed in alternative medicine. I didn’t. This caused a lot of spoken and unspoken conflict. There were global values issues at play: reason vs. emotion, logic vs. social charisma, whether her perspective on life was good or bad. I’m embarrassed to say I was rude and inappropriate. But it was coming from a well-meaning place; I didn’t want any harm to come to patients from misinformation, and I was very frustrated, because I didn’t see how I could prevent that outcome.

Finally, at my wit’s end, I blurted out what I wanted: I wanted to have veto power over any information we sent to patients, to make sure it didn’t contain any factual inaccuracies.

Guess what? She agreed instantly.

This probably should have been obvious (and I’m sure it was obvious to her). My job was producing the research reports, while her jobs included marketing and operations. The whole point of division of labor is that we can each stick to our own tasks and not have to critique each other’s entire philosophy of life, since it’s not relevant to getting the company’s work done as well as possible. But I was extremely inexperienced at working with people at that time.

It’s not fair to your coworkers to try to alter their private beliefs. (Would you try to change their religion?) A company is an association of people who cooperate on a local task. They don’t have to see eye-to-eye about everything in the world, so long as they can work out their disagreements about the task at hand.

This is a skill that “practical” people have, and that “idealistic” and “theoretical” people are often weak at: the ability to declare some issues off topic. We’re trying to decide what to do in the here and now; we don’t always have to turn things into a debate about underlying ethical or epistemological principles. It’s not that principles don’t exist (though some self-identified “pragmatic” or “practical” people are against principles per se; I don’t agree with them). It’s that debates about general principles can be unproductive: they take up too much time, generate too much ill will, and often aren’t necessary for coming to agreement on the tactical plan of what to do next.

Well, what about longer-term, more intimate partnerships? Maybe in a strictly professional relationship you can avoid talking about politics and religion altogether, but in a closer relationship, like a marriage, you actually want to get alignment on underlying values, worldviews, and principles. My husband and I spend a ton of time talking about the diffs between our opinions, and reconciling them, until we do basically have the same worldview, seen through the lens of two different temperaments. Isn’t that a counterexample to this “just debate the practical issue at hand” thing? Isn’t intellectual discussion really valuable to intellectually intimate people?

Well, it’s complicated. Because I’ve found the same trick of narrowing the scope of the argument and just asking for what I want resolves debates with my husband too.

When I find myself “debating around” a request, it’s often debating in bad faith. I’m not actually trying to find out what the risks of [not what I want] are in real life; I’m trying to use talking about danger as a way to scare him into doing [what I want]. If I’m quoting an expert nutritionist to argue that we should have home-cooked family dinners, my motivation is not actually curiosity about the long-term health dangers of not eating as a family, but simply that I want family dinners and I’m throwing spaghetti at the wall, hoping some pro-dinner argument will work on him. The “empirical” or “intellectual” debate is just so much rhetorical window dressing for an underlying request. And when that’s going on, it’s better to notice and redirect to the actual underlying desire.

Then you can get to the actual negotiation, like: what makes family dinners undesirable to you? How could we mitigate those harms? What alternatives would work for both of us?

Debating a far-mode abstraction (like “how do home eating habits affect children’s long-term health?”) is often an inefficient way of debating what’s really a near-mode practical issue only weakly related to the abstraction (like “what kind of schedule should our household have around food?”). The far-mode abstract question still exists and might be worth getting into as well, but it may also recede dramatically in importance once you’ve resolved the practical issue.

One of my long-running (and interesting and mutually respectful) disagreements with my friend Michael Vassar is about the importance of local/tactical vs. global/strategic cooperation. Compared to me, he’s much more likely to value getting to alignment with people on fundamental values, epistemology, and world-models. He would rather cooperate with people who share his principles but take opposite positions on object-level, near-term decisions than with people who oppose his principles but are willing to cooperate tactically with him on one-off decisions.

The reasoning for this, he told me, is simply that the long term is long and the short term is short. There’s a lot more value to be gained from someone who keeps actively pursuing goals aligned with yours, even when they’re far away and you haven’t spoken in a long time, than from someone you can persuade or incentivize to do a specific thing you want right now, but who won’t be any help in the long run (or might actually oppose your long-run aims).

This seems like fine reasoning to me, as far as it goes. I think my point of departure is that I estimate different numbers for probabilities and expected values than he does. I expect to get a lot of mileage out of relatively transactional or local cooperation (e.g. donors to my organization who don’t buy into all of my ideals, synagogue members who aren’t intellectually rigorous but are good people to cooperate with on charity, mutual aid, or childcare). And I expect that, for me, getting to alignment on principles is usually hard, expensive, and unlikely to work.

Now, I think compared to most people in the world, we’re both pretty far on the “long-term cooperation” side of the spectrum.

It’s pretty standard advice in business books about company culture, for instance, to note that the most successful teams are more likely to share idealistic visions and to get along with each other as friends outside of work. Purely transactional, working-for-a-paycheck arrangements don’t really inspire excellence. You can trust strangers in competitive market systems that effectively penalize fraud, but large areas of life aren’t like that, and there you actually have to have pretty broad value-alignment with people to get any benefit from cooperating.

I think we’d both agree that it’s unwise (and immoral, which is kind of the same thing) to try to benefit in the short term from allying with terrible people. The question is, who counts as terrible? What sorts of lapses in rigorous thinking are just normal human fallibility, and what sorts make a person seriously untrustworthy?

I’d be interested to read some discussion about when and how much it makes sense to prioritize strategic vs. tactical alliance.