Double Crux — A Strategy for Resolving Disagreement


Double crux is one of CFAR’s newer concepts, and one that’s forced a re-examination and refactoring of a lot of our curriculum (in the same way that the introduction of TAPs and Inner Simulator did previously). It rapidly became a part of our organizational social fabric, and is one of our highest-EV threads for outreach and dissemination, so it’s long overdue for a public, formal explanation.

Note that while the core concept is fairly settled, the execution remains somewhat in flux, with notable experimentation coming from Julia Galef, Kenzi Amodei, Andrew Critch, Eli Tyre, Anna Salamon, myself, and others. Because of that, this post will be less of a cake and more of a folk recipe—this is long and meandering on purpose, because the priority is to transmit the generators of the thing over the thing itself. Accordingly, if you think you see stuff that’s wrong or missing, you’re probably onto something, and we’d appreciate having it added here as commentary.

Casus belli

To a first approximation, a human can be thought of as a black box that takes in data from its environment, and outputs beliefs and behaviors (that black box isn’t really “opaque” given that we do have access to a lot of what’s going on inside of it, but our understanding of our own cognition seems uncontroversially incomplete).

When two humans disagree—when their black boxes output different answers, as below—there are often a handful of unproductive things that can occur.

The most obvious (and tiresome) is that they’ll simply repeatedly bash those outputs together without making any progress (think most disagreements over sports or politics; the people above just shouting “triangle!” and “circle!” louder and louder). On the second level, people can (and often do) take the difference in output as evidence that the other person’s black box is broken (i.e. they’re bad, dumb, crazy) or that the other person doesn’t see the universe clearly (i.e. they’re biased, oblivious, unobservant). On the third level, people will often agree to disagree, a move which preserves the social fabric at the cost of truth-seeking and actual progress.

Double crux in the ideal solves all of these problems, and in practice even fumbling and inexpert steps toward that ideal seem to produce a lot of marginal value, both in increasing understanding and in decreasing conflict-due-to-disagreement.

Prerequisites


This post will occasionally delineate two versions of double crux: a strong version, in which both parties have a shared understanding of double crux and have explicitly agreed to work within that framework, and a weak version, in which only one party has access to the concept, and is attempting to improve the conversational dynamic unilaterally.

In either case, the following things seem to be required:

  • Epistemic humility. The number one foundational backbone of rationality seems, to me, to be how readily one is able to think “It’s possible that I might be the one who’s wrong, here.” Viewed another way, this is the ability to take one’s beliefs as object, rather than being subject to them and unable to set them aside (and then try on some other belief and productively imagine “what would the world be like if this were true, instead of that?”).

  • Good faith. An assumption that people believe things for causal reasons; a recognition that having been exposed to the same set of stimuli would have caused one to hold approximately the same beliefs; a default stance of holding-with-skepticism what seems to be evidence that the other party is bad or wants the world to be bad (because as monkeys it’s not hard for us to convince ourselves that we have such evidence when we really don’t).1

  • Confidence in the existence of objective truth. I was tempted to call this “objectivity,” “empiricism,” or “the Mulder principle,” but in the end none of those quite fit. In essence: a conviction that for almost any well-defined question, there really truly is a clear-cut answer. That answer may be impractically or even impossibly difficult to find, such that we can’t actually go looking for it and have to fall back on heuristics (e.g. how many grasshoppers are alive on Earth at this exact moment, is the color orange superior to the color green, why isn’t there an audio book of Fight Club narrated by Edward Norton), but it nevertheless exists.

  • Curiosity and/or a desire to uncover truth. Originally, I had this listed as truth-seeking alone, but my colleagues pointed out that one can move in the right direction simply by being curious about the other person and the contents of their map, without focusing directly on the territory.

At CFAR workshops, we hit on the first and second through specific lectures, the third through osmosis, and the fourth through osmosis and a lot of relational dynamics work that gets people curious and comfortable with one another. Other qualities (such as the ability to regulate and transcend one’s emotions in the heat of the moment, or the ability to commit to a thought experiment and really wrestle with it) are also helpful, but not as critical as the above.

How to play

Let’s say you have a belief, which we can label A (for instance, “middle school students should wear uniforms”), and that you’re in disagreement with someone who believes some form of ¬A. Double cruxing with that person means that you’re both in search of a second statement B, with the following properties:

  • You and your partner both disagree about B as well (you think B, your partner thinks ¬B).

  • The belief B is crucial for your belief in A; it is one of the cruxes of the argument. If it turned out that B was not true, that would be sufficient to make you think A was false, too.

  • The belief ¬B is crucial for your partner’s belief in ¬A, in a similar fashion.

In the example about school uniforms, B might be a statement like “uniforms help smooth out unhelpful class distinctions by making it harder for rich and poor students to judge one another through clothing,” which your partner might sum up as “optimistic bullshit.” Ideally, B is a statement that is somewhat closer to reality than A—it’s more concrete, grounded, well-defined, discoverable, etc. It’s less about principles and summed-up, induced conclusions, and more of a glimpse into the structure that led to those conclusions.

(It doesn’t have to be concrete and discoverable, though—often after finding B it’s productive to start over in search of a C, and then a D, and then an E, and so forth, until you end up with something you can research or run an experiment on.)

At first glance, it might not be clear why simply finding B counts as victory—shouldn’t you settle B, so that you can conclusively choose between A and ¬A? But it’s important to recognize that arriving at B means you’ve already dissolved a significant chunk of your disagreement, in that you and your partner now share a belief about the causal nature of the universe.

If B, then A. Furthermore, if ¬B, then ¬A. You’ve both agreed that the states of B are crucial for the states of A, and in this way your continuing “agreement to disagree” isn’t just “well, you take your truth and I’ll take mine,” but rather “okay, well, let’s see what the evidence shows.” Progress! And (more importantly) collaboration!
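In propositional terms, those two conditionals together are just a biconditional (this restatement is not part of the original unit, but it makes explicit why finding B dissolves so much of the disagreement):

```latex
% "If B, then A" and "if not-B, then not-A" jointly give a biconditional:
(B \Rightarrow A) \;\wedge\; (\neg B \Rightarrow \neg A)
\;\equiv\; (A \Leftrightarrow B)
% Symmetrically for your partner, so once a double crux B is found,
% both parties endorse the same structure A <-> B and disagree only
% about the truth value of B itself.
```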


This is where CFAR’s versions of the double crux unit are currently weakest—there’s some form of magic in the search for cruxes that we haven’t quite locked down. In general, the method is “search through your cruxes for ones that your partner is likely to disagree with, and then compare lists.” For some people and some topics, clearly identifying your own cruxes is easy; for others, it very quickly starts to feel like one’s position is fundamental/objective/un-break-downable. Some things that seem to help:


  • Increase noticing of subtle tastes, judgments, and “karma scores.” Often, people suppress a lot of their opinions and judgments due to social mores and so forth. Generally loosening up one’s inner censors can make it easier to notice why we think X, Y, or Z.

  • Look forward rather than backward. In places where the question “why?” fails to produce meaningful answers, it’s often more productive to try making predictions about the future. For example, I might not know why I think school uniforms are a good idea, but if I turn on my narrative engine and start describing the better world I think will result, I can often sort of feel my way toward the underlying causal models.

  • Narrow the scope. A specific test case of “Steve should’ve said hello to us when he got off the elevator yesterday” is easier to wrestle with than “Steve should be more sociable.” Similarly, it’s often easier to answer questions like “How much of our next $10,000 should we spend on research, as opposed to advertising?” than to answer “Which is more important right now, research or advertising?”

  • Do “Focusing” and other resonance checks. It’s often useful to try on a perspective, hypothetically, and then pay attention to your intuition and bodily responses to refine your actual stance. For instance: (wildly asserts) “I bet if everyone wore uniforms there would be a fifty percent reduction in bullying.” (pauses, listens to inner doubts) “Actually, scratch that—that doesn’t seem true, now that I say it out loud, but there is something in the vein of reducing overt bullying, maybe?”

  • Seek cruxes independently before anchoring on your partner’s thoughts. This one is fairly straightforward. It’s also worth noting that if you’re attempting to find disagreements in the first place (e.g. in order to practice double cruxing with friends) this is an excellent way to start—give everyone the same ten or fifteen open-ended questions, and have everyone write down their own answers based on their own thinking, crystallizing opinions before opening the discussion.

Overall, it helps to keep the ideal of a perfect double crux in the front of your mind, while holding the realities of your actual conversation somewhat separate. We’ve found that, at any given moment, increasing the “double cruxiness” of a conversation tends to be useful, but worrying about how far from the ideal you are in absolute terms doesn’t. It’s all about doing what’s useful and productive in the moment, and that often means making sane compromises—if one of you has clear cruxes and the other is floundering, it’s fine to focus on one side. If neither of you can find a single crux, but instead each of you has something like eight co-cruxes of which any five are sufficient, just say so and then move forward in whatever way seems best.

(Variant: a “trio” double crux conversation in which, at any given moment, if you’re the least-active participant, your job is to squint at your two partners and try to model what each of them is saying, and where/why/how they’re talking past one another and failing to see each other’s points. Once you have a rough “translation” to offer, do so—at that point, you’ll likely become more central to the conversation and someone else will rotate out into the squinter/translator role.)

Ultimately, each move should be in service of reversing the usual antagonistic, warlike, “win at all costs” dynamic of most disagreements. Usually, we spend a significant chunk of our mental resources guessing at the shape of our opponent’s belief structure, forming hypotheses about what things are crucial and lobbing arguments at them in the hopes of knocking the whole edifice over. Meanwhile, we’re incentivized to obfuscate our own belief structure, so that our opponent’s attacks will be ineffective.

(This is also terrible because it means that we often fail to even find the crux of the argument, and waste time in the weeds. If you’ve ever had the experience of awkwardly fidgeting while someone spends ten minutes assembling a conclusive proof of some tangential sub-point that never even had the potential of changing your mind, then you know the value of someone being willing to say “Nope, this isn’t going to be relevant for me; try speaking to that instead.”)

If we can move the debate to a place where, instead of fighting over the truth, we’re collaborating on a search for understanding, then we can recoup a lot of wasted resources. You have a tremendous comparative advantage at knowing the shape of your own belief structure—if we can switch to a mode where we’re each looking inward and candidly sharing insights, we’ll move forward much more efficiently than if we’re each engaged in guesswork about the other person. This requires that we want to know the actual truth (such that we’re incentivized to seek out flaws and falsify wrong beliefs in ourselves just as much as in others) and that we feel emotionally and socially safe with our partner, but there’s a doubly-causal dynamic where a tiny bit of double crux spirit up front can produce safety and truth-seeking, which allows for more double crux, which produces more safety and truth-seeking, etc.

Pitfalls


First and foremost, it matters whether you’re in the strong version of double crux (cooperative, consent-based) or the weak version (you, as an agent, trying to improve the conversational dynamic, possibly in the face of direct opposition). In particular, if someone is currently riled up and conceives of you as rude/hostile/the enemy, then saying something like “I just think we’d make better progress if we talked about the underlying reasons for our beliefs” doesn’t sound like a plea for cooperation—it sounds like a trap.

So, if you’re in the weak version, the primary strategy is to embody the question “What do you see that I don’t?” In other words, approach from a place of explicit humility and good faith, drawing out their belief structure for its own sake, to see and appreciate it rather than to undermine or attack it. In my experience, people can “smell it” if you’re just playing at good faith to get them to expose themselves; if you’re having trouble really getting into the spirit, I recommend meditating on times in your past when you were embarrassingly wrong, and how you felt prior to realizing it compared to after realizing it.

(If you’re unable or unwilling to swallow your pride or set aside your sense of justice or fairness hard enough to really do this, that’s actually fine; not every disagreement benefits from the double-crux-nature. But if your actual goal is improving the conversational dynamic, then this is a cost you want to be prepared to pay—going the extra mile, because a) going what feels like an appropriate distance is more often an undershoot, and b) going an actually appropriate distance may not be enough to overturn their entrenched model in which you are The Enemy. Patience- and sanity-inducing rituals recommended.)

As a further tip that’s good for either version but particularly important for the weak one, model the behavior you’d like your partner to exhibit. Expose your own belief structure, show how your own beliefs might be falsified, highlight points where you’re uncertain and visibly integrate their perspective and information, etc. In particular, if you don’t want people running amok with wrong models of what’s going on in your head, make sure you’re not acting like you’re the authority on what’s going on in their head.

Speaking of non-sequiturs, beware of getting lost in the fog. The very first step in double crux should always be to operationalize and clarify terms. Try attaching numbers to things rather than using misinterpretable qualifiers; try to talk about what would be observable in the world rather than how things feel or what’s good or bad. In the school uniforms example, saying “uniforms make students feel better about themselves” is a start, but it’s not enough, and going further into quantifiability (if you think you could actually get numbers someday) would be even better. Often, disagreements will “dissolve” as soon as you remove ambiguity—this is success, not failure!

Finally, use paper and pencil, or whiteboards, or get people to treat specific predictions and conclusions as immutable objects (if you or they want to change or update the wording, that’s encouraged, but make sure that at any given moment, you’re working with a clear, unambiguous statement). Part of the value of double crux is that it’s the opposite of the weaselly, score-points, hide-in-ambiguity-and-look-clever dynamic of, say, a public political debate. The goal is to have everyone understand, at all times and as much as possible, what the other person is actually trying to say—not to try to get a straw version of their argument to stick to them and make them look silly. Recognize that you yourself may be tempted or incentivized to fall back to that familiar, fun dynamic, and take steps to keep yourself in “scout mindset” rather than “soldier mindset.”

Algorithm


This is the double crux algorithm as it currently exists in our handbook. It’s not strictly connected to all of the discussion above; it was designed to be read in context with an hour-long lecture and several practice activities (so it has some holes and weirdnesses) and is presented here more for completeness and as food for thought than as an actual conclusion to the above.

1. Find a disagreement with another person

  • A case where you believe one thing and they believe the other

  • A case where you and the other person have different confidences (e.g. you think X is 60% likely to be true, and they think it’s 90%)

2. Operationalize the disagreement

  • Define terms to avoid getting lost in semantic confusions that miss the real point

  • Find specific test cases—instead of (e.g.) discussing whether you should be more outgoing, evaluate whether you should have said hello to Steve in the office yesterday morning

  • Wherever possible, try to think in terms of actions rather than beliefs—it’s easier to evaluate arguments like “we should do X before Y” than it is to converge on “X is better than Y.”

3. Seek double cruxes

  • Seek your own cruxes independently, and compare with those of the other person to find overlap

  • Seek cruxes collaboratively, by making claims (“I believe that X will happen because Y”) and focusing on falsifiability (“It would take A, B, or C to make me stop believing X”)

4. Resonate

  • Spend time “inhabiting” both sides of the double crux, to confirm that you’ve found the core of the disagreement (as opposed to something that will ultimately fail to produce an update)

  • Imagine the resolution as an if-then statement, and use your inner sim and other checks to see if there are any unspoken hesitations about the truth of that statement

5. Repeat!
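As a toy illustration of step 3, comparing independently written crux lists and keeping only the claims the two parties take opposite positions on might look like the sketch below. Note that all names here (`Belief`, `find_double_cruxes`, the example claims) are invented for this illustration and are not part of any CFAR material:

```python
# Toy sketch of step 3 ("seek double cruxes") from the algorithm above.
# Invented for illustration only; not part of the CFAR handbook.
from dataclasses import dataclass


@dataclass(frozen=True)
class Belief:
    claim: str           # an operationalized statement
    holder_agrees: bool  # True if the list's owner believes the claim


def find_double_cruxes(your_cruxes, their_cruxes):
    """Compare two independently written crux lists and keep only the
    claims that appear on both lists with opposite positions (you think
    B, they think not-B) -- the candidate double cruxes."""
    their_positions = {b.claim: b.holder_agrees for b in their_cruxes}
    return [
        b.claim
        for b in your_cruxes
        if b.claim in their_positions
        and their_positions[b.claim] != b.holder_agrees
    ]


yours = [
    Belief("uniforms reduce class-based judging among students", True),
    Belief("uniforms are a major cost for poor families", False),
]
theirs = [
    Belief("uniforms reduce class-based judging among students", False),
    Belief("uniforms limit student self-expression", True),
]

# Prints the one claim both parties hold opposite positions on.
print(find_double_cruxes(yours, theirs))
```

The point of the sketch is only that a double crux is an intersection with disagreement: claims on just one list, or claims both parties already agree about, drop out automatically.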


Conclusion

We think double crux is super sweet. To the extent that you see flaws in it, we want to find them and repair them, and we’re currently betting that repairing and refining double crux is going to pay off better than trying something totally different. In particular, we believe that embracing the spirit of this mental move has huge potential for unlocking people’s abilities to wrestle with all sorts of complex and heavy hard-to-parse topics (like existential risk, for instance), because it provides a format for holding a bunch of partly-wrong models at the same time while you distill the value out of each.

Comments appreciated; critiques highly appreciated; anecdotal data from experimental attempts to teach yourself double crux, or teach it to others, or use it on the down-low without telling other people what you’re doing extremely appreciated.

- Duncan Sabien

[1] One reason good faith is important is that even when people are “wrong,” they are usually partially right—there are flecks of gold mixed in with their false belief that can be productively mined by an agent who’s interested in getting the whole picture. Normal disagreement-navigation methods have some tendency to throw out that gold, either by allowing everyone to protect their original belief set or by replacing everyone’s view with whichever view is shown to be “best,” thereby throwing out data, causing information cascades, disincentivizing “noticing your confusion,” etc.

The central assumption is that the universe is like a large and complex maze that each of us can only see parts of. To the extent that language and communication allow us to gather info about parts of the maze without having to investigate them ourselves, that’s great. But when we disagree on what to do because we each see a different slice of reality, it’s nice to adopt methods that allow us to integrate and synthesize, rather than methods that force us to pick and pare down. It’s like the parable of the three blind men and the elephant—whenever possible, avoid generating a bottom-line conclusion until you’ve accounted for all of the available data.

The agent at the top mistakenly believes that the correct move is to head to the left, since that seems to be the most direct path toward the goal. The agent on the right can see that this is a mistake, but it would never have been able to navigate to that particular node of the maze on its own.