Musings on Double Crux (and “Productive Disagreement”)

Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully a discussion-prompt.

Double Crux has been making the rounds lately (mostly on Facebook, but I hope for this to change). It seems like the technique has failed to take root as well as it should. What’s up with that?

(If you aren’t yet familiar with Double Crux, I recommend checking out Duncan’s post on it in full. There’s a lot of nuance that might be missed with a simple description.)

Observations So Far

  • Double Crux hasn’t percolated beyond circles directly adjacent to CFAR (it seems to be learned mostly by word of mouth). This might be evidence that it’s too confusing or nuanced a concept to teach without word of mouth and lots of examples. Or it might be evidence that we have not yet taught it very well.

  • “Double Crux” seems to refer to two things: the specific action of “finding the crux(es) you both agree the debate hinges on,” and “the overall pattern of behavior surrounding the use of Official Doublecrux Technique.” (I’ll be using the phrase “productive disagreement” to refer to the second, broader usage.)

Double Crux seems hard to practice, for a few reasons.

Filtering Effects

  • In local meetups where rationality-folk attempt to practice productive disagreement on purpose, they often have trouble finding things to disagree about. Instead they either:

    • are already filtered to have similar beliefs,

    • quickly realize their beliefs shouldn’t be that strong (e.g. they disagree on Open Borders, but as soon as they start talking they admit that neither of them really has that strong an opinion), or

    • have wildly different intuitions about deep moral sentiments that are hard to make headway on in a reasonable amount of time, often untethered to anything empirical (e.g. what’s more important? Preventing suffering? Material freedom? Accomplishing interesting things?)

Insufficient Shared Trust

  • Meanwhile, in many online spaces, people disagree all the time. And even if they’re both nominally rationalists, they have an (arguably justified) distrust of people on the internet who don’t seem to be arguing in good faith. So there isn’t enough foundation for a productive disagreement at all.

  • One failure mode of Double Crux is when people disagree on what frame to even be using to evaluate truth, in which case the debate recurses all the way to the level of basic epistemology. It often doesn’t seem to be worth the effort to resolve that.

  • Perhaps most frustratingly: it seems to me that there are many longstanding disagreements between people who should totally be able to communicate clearly, update rationally, and make useful progress together, and yet those disagreements don’t go away; people just eventually start ignoring each other or leave the dispute unresolved. (An example I feel safe bringing up publicly is the argument between Hanson and Yudkowsky, although this may be a case of the ‘what frame are we even using’ issue above.)

That last point is one of the biggest motivators of this post. If the people I most respect can’t productively disagree in a way that leads to clear progress, recognizable from both sides, then what is the rationality community even doing? (Whether you consider the primary goal to be “raising the sanity waterline” or “building a small intellectual community that can solve particular hard problems,” this bodes poorly.)

Possible Prerequisites for Progress

There are a large number of sub-skills you need in order to productively disagree. To have public norms surrounding disagreement, you not only need individuals to have those skills; each person also needs to trust that the others have them.

Here’s a rough list of those skills. (Note: this is long. It’s less important that you read the whole list than that you notice how long it is; the sheer number of sub-skills is a big part of why Double Cruxing is hard.)

  • Background beliefs (listed in Duncan’s original post)

    • Epistemic humility (“I could be the one who’s wrong here”)

    • Good Faith (“I trust the other person to be believing things that make sense to them, which I’d have ended up believing if I were exposed to the same stimuli, and that they are generally trying to find the truth”)

    • Confidence in the existence of objective truth

    • Curiosity / Desire to uncover truth

  • Building-Block and Meta Skills (necessary, or at least very helpful, for learning everything else)

  • Notice you are in a failure mode, and step out. Examples:

    • You are fighting to make sure a side/argument wins

    • You are fighting to make another side/argument lose (potentially jumping on something that seems allied to something/someone you consider bad/dangerous)

    • You are incentivized to believe something, or not to notice something, because of social or financial rewards

    • You’re incentivized not to notice something, or not to think it’s important, because it’d be physically inconvenient/annoying

    • You are offended/angered/defensive/agitated

    • You’re afraid you’ll lose something important if you lose a belief (possibly ‘bucket errors’)

    • You’re rounding a person’s statement off to the nearest stereotype instead of trying to actually understand and respond to what they’re saying

    • You’re arguing about definitions of words instead of ideas

    • Notice “Freudian slip”-ish things that hint that you’re thinking about something in an unhelpful way. (For example, while writing this, I typed out “your opponent” to refer to the person you’re Double Cruxing with, which is a holdover from treating it like an adversarial debate.)

(The “Step Out” part can be pretty hard, and covering it would take a long series of blogposts, but hopefully this at least gets across the ideas to shoot for.)

  • Social Skills (e.g. not feeding into negative spirals, noticing what emotional states or patterns other people are in [*without* accidentally rounding them off to a stereotype])

    • Ability to tactfully disagree in a way that arouses curiosity rather than defensiveness

    • Leaving your colleague a line of retreat (i.e. not making them lose face if they change their mind)

    • Socially rewarding people who change their mind (in general, frequently, so that your colleague trusts that you’ll do so for them)

    • Ability to listen (in a way that makes someone feel listened to) so they feel like they got to actually talk, which makes them inclined to listen as well

    • Ability to notice if someone else seems to be in one of the above failure modes (and then the ability to point it out gently)

    • Cultivating empathy and curiosity about other people, so that the other social skills come more naturally, and so that even if you don’t expect someone to be right, you can still see value in at least understanding their reasoning (fleshing out your model of how other people might think)

    • Ability to communicate in (and listen to) a variety of conversational styles: “code switching,” learning another person’s jargon, or explaining yours, without getting frustrated

    • Habit of asking clarifying questions that help your partner find the Crux of their beliefs

  • Actually Thinking About Things

    • Understanding when and how to apply math, statistics, etc.

    • Practice thinking causally

    • Practice various creativity-related things that help you brainstorm ideas, notice implications of things, etc.

    • Operationalize vague beliefs into concrete predictions

  • Actually Changing Your Mind

    • Notice when you are confused or surprised, and treat this as a red flag that something about your models is wrong (either you have the wrong model or no model)

    • Ability to identify what the actual Cruxes of your beliefs are

    • Ability to track the small bits of evidence that are accumulating. If enough bits have accumulated that you should at least be taking an idea *seriously* (even if not changing your mind yet), go through the motions of thinking through what the implications WOULD be, to help future updates happen more easily.

    • If enough evidence has accumulated that you should change your mind about a thing… like, actually do that. See the list of failure modes above that may prevent this. (That said, if you have a vague nagging sense that something isn’t right even if you can’t articulate it, try to focus on that and flesh it out rather than trying to steamroll over it.)

    • Explore Implications: when you change your mind on a thing, don’t just acknowledge it; actually think about what other concepts in your worldview should change. Do this:

      • because it *should* have other implications, and it’s useful to know what they are, and

      • because it’ll help you actually retain the update (instead of letting it slide away when it becomes socially/politically/emotionally/physically inconvenient to believe it, or just forgetting)

    • If you notice your emotions are not in line with what you now believe the truth to be (on a System 2 level), figure out why that is.

  • Noticing Disagreement and Confusion, and then putting in the work to resolve it

  • If you have all the above skills, and your partner does too, and you both trust that this is the case, you can still fail to make progress if you don’t actually follow up and schedule the time to talk through the issues thoroughly. For deep disagreements this can take years. It may or may not be worth it. But if there are longstanding disagreements that continuously cause strife, it may be worthwhile.

Building Towards Shared Norms

When smart, insightful people disagree, at least one of them is doing something wrong, and it seems like we should be trying harder to notice and resolve it.

A rough sketch of a norm I’d like to see:

Trigger: You’ve gotten into a heated dispute where at least one person feels the other is arguing in bad faith (especially in public/online settings).

Action: Before arguing further:

  • stop to figure out if the argument is even worth it

  • if so, each person runs through some basic checks (e.g. “am *I* being overly tribal/emotional?”)

  • instead of continuing to argue in public, where there’s a lot more pressure to avoid losing face or to steer social norms, they continue the discussion privately, in the most human-centric way that’s practical

  • they talk until they at least succeed at Step 1 of Double Crux (i.e. agreeing on where they disagree, and hopefully figuring out a possible empirical test for it). Ideally, they also come to as much agreement as they can.

  • Regardless of how far they get, they write up a short post (maybe just a paragraph, maybe longer depending on context) on what they did end up agreeing on or figuring out. (The post should be something they both sign off on.)