Musings on Double Crux (and “Productive Disagreement”)

Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully discussion-prompt.

Double Crux has been making the rounds lately (mostly on Facebook, but I hope for that to change). It seems like the technique hasn’t taken root as well as it should have. What’s up with that?

(If you aren’t yet familiar with Double Crux I recommend checking out Duncan’s post on it in full. There’s a lot of nuance that might be missed with a simple description.)

Observations So Far

  • Double Crux hasn’t percolated beyond circles directly adjacent to CFAR (it seems to be learned mostly by word of mouth). This might be evidence that it’s too confusing or nuanced a concept to teach without word of mouth and lots of examples. It might also be evidence that we just haven’t taught it very well yet.

  • “Double Crux” seems to refer to two things: the specific action of “finding the crux(es) you both agree the debate hinges on”, and “the overall pattern of behavior surrounding using Official Doublecrux Technique”. (I’ll be using the phrase “productive disagreement” to refer to the second, broader usage.)

Double Crux seems hard to practice, for a few reasons.

Filtering Effects

  • In local meetups where rationality-folk attempt to practice productive disagreement on purpose, they often have trouble finding things to disagree about. Instead, they:

    • are already filtered to have similar beliefs,

    • quickly realize their beliefs shouldn’t be that strong (e.g. they disagree on Open Borders, but as soon as they start talking they admit that neither of them really has that strong an opinion), or

    • have wildly different intuitions about deep moral sentiments, often untethered to anything empirical, that are hard to make headway on in a reasonable amount of time (e.g. what’s more important: preventing suffering, material freedom, or accomplishing interesting things?)

Insufficient Shared Trust

  • Meanwhile in many online spaces, people disagree all the time. And even if they’re both nominally rationalists, they have an (arguably justified) distrust of people on the internet who don’t seem to be arguing in good faith. So there isn’t enough foundation to do a productive disagreement at all.

  • One failure mode of Double Crux is when people disagree on what frame to even be using to evaluate truth, in which case the debate recurses all the way to the level of basic epistemology. It often doesn’t seem to be worth the effort to resolve that.

  • Perhaps most frustratingly: it seems to me that there are many longstanding disagreements between people who should totally be able to communicate clearly, update rationally, and make useful progress together, and those disagreements don’t go away; people just eventually start ignoring each other or leave the dispute unresolved. (An example I feel safe bringing up publicly is the argument between Hanson and Yudkowsky, although this may be a case of the ‘what frame are we even using’ issue above.)

That last point is one of the biggest motivators of this post. If the people I most respect can’t productively disagree in a way that leads to clear progress, recognizable from both sides, then what is the rationality community even doing? (Whether you consider the primary goal to be “raising the sanity waterline” or “building a small intellectual community that can solve particular hard problems”, this bodes poorly.)

Possible Pre-Requisites for Progress

There’s a large number of sub-skills you need in order to productively disagree. And to have public norms surrounding disagreement, you not only need individuals to have those skills; you need them to trust that the other people involved have those skills as well.

Here’s a rough list of those skills. (Note: this is long. It’s less important that you read the whole list than that you notice the list is long; the length itself is part of why Double Cruxing is hard.)

  • Background beliefs (listed in Duncan’s original post)

    • Epistemic humility (“I could be the one who’s wrong here”)

    • Good Faith (“I trust the other person to be believing things that make sense to them, which I’d have ended up believing if I were exposed to the same stimuli, and that they are generally trying to find the truth”)

    • Confidence in the existence of objective truth

    • Curiosity / Desire to uncover truth

  • Building-Block and Meta Skills (necessary, or at least very helpful, for learning everything else)

    • Notice you are in a failure mode, and step out. Examples:

      • You are fighting to make sure a side/argument wins

      • You are fighting to make another side/argument lose (potentially jumping on something that seems allied with something/someone you consider bad/dangerous)

      • You are incentivized to believe something, or not to notice something, because of social or financial rewards

      • You’re incentivized not to notice something, or not to think it’s important, because it’d be physically inconvenient/annoying

      • You are offended/angered/defensive/agitated

      • You’re afraid you’ll lose something important if you lose a belief (possibly due to ‘bucket errors’)

      • You’re rounding a person’s statement off to the nearest stereotype instead of trying to actually understand and respond to what they’re saying

      • You’re arguing about definitions of words instead of ideas

      • Notice “Freudian slip”-ish things that hint you’re thinking about something in an unhelpful way. (For example, while writing this, I typed out “your opponent” to refer to the person you’re Double Cruxing with, which is a holdover from treating it like an adversarial debate.)

(The “Step Out” part can be pretty hard, and doing it justice would take a long series of blogposts, but hopefully this at least gets across the ideas to shoot for.)

  • Social Skills (e.g. not feeding into negative spirals, noticing what emotional state or patterns other people are in [*without* accidentally rounding them off to a stereotype])

    • Ability to tactfully disagree in a way that arouses curiosity rather than defensiveness

    • Leaving your colleague a line of retreat (i.e. not making them lose face if they change their mind)

    • Socially reward people who change their mind (in general, frequently, so that your colleague trusts that you’ll do so for them)

    • Ability to listen (in a way that makes someone feel listened to), so they feel like they actually got to talk, which in turn makes them more inclined to listen as well

    • Ability to notice if someone else seems to be in one of the above failure modes (and then, ability to point it out gently)

    • Cultivate empathy and curiosity about other people, so that the other social skills come more naturally, and so that even if you don’t expect them to be right, you can still see value in understanding their reasoning (fleshing out your model of how other people might think)

    • Ability to communicate in (and to listen to) a variety of styles of conversation, “code switching”, learning another person’s jargon or explaining yours without getting frustrated

    • Habit of asking clarifying questions that help your partner find the Crux of their beliefs.

  • Actually Thinking About Things

    • Understanding when and how to apply math, statistics, etc.

    • Practice thinking causally

    • Practice various creativity-related things that help you brainstorm ideas, notice implications of things, etc.

    • Operationalize vague beliefs into concrete predictions (e.g. turning a vague stance on Open Borders into a specific, checkable forecast)

  • Actually Changing Your Mind

    • Notice when you are confused or surprised, and treat this as a red flag that something about your models is wrong (either you have the wrong model or no model at all)

    • Ability to identify the actual Cruxes of your beliefs.

    • Ability to track small bits of evidence as they accumulate. If enough bits of evidence have accumulated that you should at least be taking an idea *seriously* (even if not changing your mind yet), go through the motions of thinking through what the implications WOULD be, to help future updates happen more easily. (See the short sketch after this list for one way to make “bits of evidence” precise.)

    • If enough evidence has accumulated that you should change your mind about a thing… like, actually do that. See the list of failure modes above that may prevent this. (That said, if you have a vague nagging sense that something isn’t right even if you can’t articulate it, try to focus on that and flesh it out rather than trying to steamroll over it)

    • Explore Implications: when you change your mind on a thing, don’t just acknowledge it; actually think about what other concepts in your worldview should change. Do this:

      • because it *should* have other implications, and it’s useful to know what they are…

      • because it’ll help you actually retain the update (instead of letting it slide away when it becomes socially/politically/emotionally/physically inconvenient to believe it, or just forgetting)

    • If you notice your emotions are not in line with what you now believe the truth to be (on a System 2 level), figure out why that is.

  • Noticing Disagreement and Confusion, and then putting in the work to resolve it

    • If you have all the above skills, and your partner does too, and you both trust that this is the case, you can still fail to make progress if you don’t actually follow up and schedule the time to talk through the issues thoroughly. For deep disagreements this can take years. It may or may not be worth it. But if there are longstanding disagreements that continuously cause strife, it may well be worthwhile.
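One way to make the “bits of evidence” idea above concrete (a sketch of my own, not something from Duncan’s post): measure each observation by the base-2 log of its likelihood ratio. Assuming the observations are independent given each hypothesis, the bits simply add onto your prior log-odds:

\[
\log_2 \frac{P(H \mid E_1, \ldots, E_n)}{P(\neg H \mid E_1, \ldots, E_n)} \;=\; \log_2 \frac{P(H)}{P(\neg H)} \;+\; \sum_{i=1}^{n} \log_2 \frac{P(E_i \mid H)}{P(E_i \mid \neg H)}
\]

For example, if your prior odds on a claim are 1:8 (about 11%, i.e. −3 bits), and you then notice three independent observations that are each twice as likely if the claim is true (+1 bit each), you’re at −3 + 3 = 0 bits: even odds. Not enough to have changed your mind, but well past the threshold for taking the idea seriously.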

Building Towards Shared Norms

When smart, insightful people disagree, at least one of them is doing something wrong, and it seems like we should be trying harder to notice and resolve it.

A rough sketch of a norm I’d like to see:

Trigger: You’ve gotten into a heated dispute where at least one person feels the other is arguing in bad faith (especially in public/online settings)

Action: Before arguing further:

  • stop to figure out if the argument is even worth it

  • if so, each person runs through some basic checks (e.g. “am *I* being overly tribal/emotional?”)

  • instead of continuing to argue in public, where there’s a lot more pressure to not lose face or to steer social norms, they continue the discussion privately, in the most human-centric way that’s practical.

  • they talk until they at least succeed at step 1 of Double Crux (i.e. agree on where they disagree, and hopefully figure out a possible empirical test for it). Ideally, they also come to as much agreement as they can.

  • Regardless of how far they get, they write up a short post (maybe just a paragraph, maybe longer depending on context) on what they did end up agreeing on or figuring out. (The post should be something they both sign off on.)