Contra double crux

Summary: CFAR proposes double crux as a method to resolve disagreement: instead of arguing over some belief B, one should look for a crux (C) which underlies it, such that if either party changed their mind over C, they would change their mind about B.

I don’t think double crux is that helpful, principally because ‘double cruxes’ are rare in topics where reasonable people differ (and they can be asymmetric, be about a consideration’s strength rather than its direction, and so on). I suggest this may diagnose the difficulty others have noted in getting double crux to ‘work’. Good philosophers seem to do much better than double cruxing using different approaches.

I aver that the strengths of double crux are primarily other epistemic virtues, pre-requisite to double crux, which are conflated with double cruxing itself (e.g. it is good to have a collaborative rather than combative mindset when disagreeing). Conditional on having this pre-requisite set of epistemic virtues, double cruxing adds no further benefit, and is probably inferior to other means of discussion exemplified by good philosophers. I recommend we look elsewhere.

What is a crux?

From Sabien’s exposition, a crux for some belief B is another belief C such that, if one changed one’s mind about C, one would change one’s mind about B. The original example was the impact of school uniforms concealing unhelpful class distinctions being a crux for whether one supports or opposes school uniforms.

A double crux is a particular case where two people disagree over B and have the same crux, albeit going in opposite directions. Say Xenia believes B (because she believes C) and Yevgeny disbelieves B (because he does not believe C): then if Xenia stopped believing C, she would stop believing B (and thus agree with Yevgeny), and vice versa.

How common are cruxes (and double cruxes)?

I suggest the main problem facing the ‘double crux technique’ is that disagreements like Xenia’s and Yevgeny’s, which can eventually be traced to a single underlying consideration, are the exception rather than the rule. Across most reasonable people on most recondite topics, ‘cruxes’ are rare, and ‘double cruxes’ (roughly) exponentially rarer.

For many recondite topics I think about, my credence in them arises from the balance of a variety of considerations pointing in either direction. Thus whether or not I believe ‘MIRI is doing good work’, ‘God exists’, or ‘The top marginal tax rate in the UK should be higher than its current value’ does not rely on a single consideration or argument; rather its support is distributed over a plethora of issues. Although in some cases undercutting what I take as the most important consideration would push my degree of belief over or under 0.5, in other cases it would not.

Thus if I meet someone else who disagrees with me on (say) whether God exists, it would be remarkable if our disagreement hinged on (for example) the evidential argument from evil, such that if I could persuade them of its soundness they would renounce their faith, and vice versa. Were I persuaded the evidential argument from evil ‘didn’t work’, I expect I would remain fairly sceptical of God’s existence; were I to persuade them it ‘does work’, I would not be surprised if they maintained other evidence nonetheless makes God’s existence likely on the total balance of evidence. And so on and so forth for other issues where reasonable people disagree. I suspect a common case would be reasonably close agreement on common information, yet beliefs diverging based on ‘priors’ comprised of a melange of experiences, gestalts, intuitions, and other pieces of more ‘private’ evidence.

Auxiliary challenges to double crux

I believe there are other difficulties with double crux, somewhat related to the above:


As implied above, even in cases where there is a crux C for person X believing B, C may not be a crux for B for person Y; Y’s crux might be something else (A, say). (Or, more generally, X and Y’s sets of cruxes may be disjoint.) A worked example:

Carl Shulman and I disagree about whether MIRI is doing good research (we have money riding on it). I expect if I lose the bet, I’d change my mind substantially about the quality of MIRI’s work (i.e. my view would be favourable rather than unfavourable). I don’t see why this should be symmetrical between Shulman and me. If he lost the bet, he may still have a generally favourable view of MIRI, and a ‘crux’ for him may be some other piece or collection of evidence.

X or Y may simply differ in the resilience of their credence in B, such that one or the other’s belief shifts more on being persuaded of a particular consideration. One common scenario (intra-EA) would be that if one is trying to chase ‘hits’, one is probably more resilient to subsequent adverse information than to the initial steers that suggested a given thing could be a hit.

A related issue is when one person believes they are in receipt of a decisive consideration for or against B that their interlocutor is unaware of.

‘Changing one’s mind’ around p=0.5 isn’t (that) important

In most practical cases, a difference between 1 and 0.51, or between 0 and 0.49, is much more important than between 0.49 and 0.51. Thus disagreements over confidence of dis/belief can be more important, even if they may not count as ‘changing one’s mind’: I probably differ more with a ‘convinced Atheist’ than with a ‘doubter who leans slightly towards Theism’.

Many arguments and considerations are abductive, and so lend strength to a particular belief. Thus a similar challenge applies to proposed cruxes: they may concern the strength, rather than the direction, of a given consideration. One could imagine the ‘crux’ between me and the hypothetical convinced Atheist is that they think the evidential problem of evil provides overwhelming disconfirmation of Theism, whilst I think it’s persuasive, perhaps decisive, but not so strong that it drives reasonable credence in Theism down to near-zero.

Sabien’s exposition recognises this, and so suggests one can ‘double crux’ over varying credences. So in this sample disagreement, the belief is ‘Atheism is almost certain’, and the crux is ‘the evidential argument from evil is overwhelming’. Yet our language for credences is fuzzy, and so what would count as a crux for the difference between (say) ‘somewhat confident’ versus ‘almost certain’ is hard to nail down in a satisfactory inter-subjective way. The alternative, where any change of raw credence counts as ‘changing one’s mind’, entails that all considerations we take to support our credence in a given belief are cruxes.


I suggest these difficulties may offer a good diagnosis for why double cruxing has not always worked well. Anecdata seems to vary from those who have found it helpful to those who haven’t seen any benefit (but perhaps leaning towards the latter), along with remarks to the effect of wanting to see a public example.

Raemon’s subsequent exegesis helpfully distinguishes between the actual double crux technique and “the overall pattern of behaviour surrounding this Official Double Crux technique”. They also offer a long list of considerations around the latter which may be pre-requisite for double cruxing working well (e.g. Social Skills, Actually Changing your Mind, and so on).

I wonder what value double crux really adds, if Raemon’s argument is on the right track. If double cruxing requires many (or most) of the pre-requisites suggested, all disagreements conditioned on meeting these pre-requisites will go about as well whether one uses double crux or some other intuitive means of subsequent discussion.

A related concern of mine is a ‘castle-and-keep’-esque defence of double crux which arises from equivocating between double crux per se and a host of admirable epistemic norms it may rely upon. Thus, when defended, double crux may transmogrify from “look for some C which, if you changed your mind about it, you’d change your mind about B too” to a large set of incontrovertibly good epistemic practices: “It is better to be collaborative rather than combative in discussion, and be willing to change one’s mind (etc.)”. Yet even if double cruxing is associated with (or requires) these good practices, it is not a necessary condition for them.

Good philosophers already disagree better than double cruxing

To find fault, easy; to do better, difficult

- Plutarch (paraphrased)

Per Plutarch’s remark, any shortcomings in double crux may count for little if it is the ‘best we’ve got’. However, I believe not only that I can offer a better approach, but that this approach already exists ‘in the wild’. I have the fortune of knowing many extraordinarily able philosophers, and not only observe their discussions but (as they also have extraordinary reserves of generosity and forbearance) participate in them as well. Their approach seems to do much better than reports of what double cruxing accomplishes.

What roughly happens is something like this:

  1. X and Y realize their credences on some belief B vary considerably.

  2. X and Y both offer what appear (to their lights) the strongest considerations that push them to a higher/lower credence on B.

  3. X and Y attempt to prioritize these considerations by how sensitive their credence in B is to each of them, via some mix of resilience, degree of disagreement over these considerations, and so forth.

  4. They then discuss these in order of priority, moving on when the likely yield of a topic drops below that of the next candidate, subject to some underlying constraint on time.
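The prioritization in steps 3 and 4 can be read as a rough expected-value-of-information ordering. A minimal sketch of that reading, where all the considerations, credences, and probabilities are illustrative assumptions (not taken from any actual discussion):

```python
# Toy model of step 3: rank considerations by the expected absolute
# shift in one's credence in B if each consideration were settled.
# All figures below are made-up illustrations.

def expected_shift(current, cred_if_for, cred_if_against, p_for):
    """Expected |change| in credence in B once a consideration is
    resolved either in B's favour or against it."""
    return (p_for * abs(cred_if_for - current)
            + (1 - p_for) * abs(cred_if_against - current))

credence_in_B = 0.3  # X's current credence in B (e.g. 'God exists')

# (consideration, credence in B if it resolves in B's favour,
#  credence if it resolves against, probability it resolves in favour)
considerations = [
    ("evidential argument from evil", 0.45, 0.25, 0.5),
    ("fine-tuning argument",          0.60, 0.28, 0.3),
    ("moral argument",                0.35, 0.29, 0.4),
]

# Discuss in order of expected yield (step 4 walks down this list,
# stopping when time runs out or yields become negligible).
ranked = sorted(considerations,
                key=lambda c: expected_shift(credence_in_B, *c[1:]),
                reverse=True)
for name, *_ in ranked:
    print(name)
```

In practice, of course, these estimates are informal and implicit; the point of the sketch is only to make ‘likely yield’ concrete, and to show why a low-probability but high-impact consideration can outrank the consideration one currently takes as most central.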

This approach seems to avoid the ‘in theory’ objections I raise against double crux above. It also seems to avoid some of the ‘in practice’ problems people observe:

  • These discussions often occur (in fencing terms) at double time, and thus one tends not to flounder trying to find ‘double-cruxy’ issues. Atheist may engage Theist by attempting to undermine the free will defence to the argument from evil, whilst Theist may engage Atheist on the deficiencies of moral anti-realism to prepare the ground for a moral argument for the existence of God. These may be crux-y, but they may be highly asymmetrical. Atheist may be a compatibilist but grant libertarian free will for the sake of argument, for example: thus Atheist’s credence in God will change little even if persuaded the free will defence broadly ‘checks out’ if one grants libertarian free will, and vice versa.

  • These discussions seldom get bogged down in fundamental disagreements. Although Deontologist and Utilitarian recognise their views on normative ethics are often a ‘double crux’ for many applied ethics questions (e.g. euthanasia), they mutually recognise their overall views on normative ethics will likely be sufficiently resilient that the likelihood of either of them ‘changing their mind’ based on a conversation is low. Instead they turn their focus to other matters which are less resilient, and where they thus anticipate a greater likelihood of someone or other changing their mind.

  • There appear to be more realistic expectations about the result. If Utilitarian and Deontologist do discuss the merits of utilitarianism versus (say) Kantianism, there’s little expectation of ‘resolving their disagreement’ or of finding mutually crucial considerations. Rather they pick at a particular leading consideration on either side and see whether it may change their confidence (this is broadly reflected in the philosophical literature: papers tend to concern particular arguments or considerations, rather than offering all-things-considered determinations of broad recondite philosophical positions).

  • There appear to be better stopping rules. On the numerous occasions where I’m not aware of a particularly important consideration, it often seems a better use of everyone’s time for me to read about it in the relevant literature rather than continuing to discuss (I’d guess ‘reading a book’ beats ‘discussing with someone you disagree with’ on getting more accurate beliefs about a topic per unit time surprisingly often).

Coda: Wherefore double crux?

It is perhaps possible for double crux to be expanded or altered to capture the form of discussion I point to above, and perhaps one can recast all the beneficial characteristics I suggest in double crux verbiage. Yet such a program appears a fool’s errand: the core idea of double crux introduced at the top of the post is distinct not only from generally laudable epistemic norms (c.f. Intermezzo, supra), but also from the practices of the elite cognisers I point towards in the section above. A concept of double crux so altered to incorporate these things is epiphenomenal: the engine driving the better disagreement is simply those other principles and practices double crux has now appropriated, and its chief result is to add terminological overhead and, perhaps, inapt approximation.

I generally think the rationalist community already labours under too much bloated jargon: words and phrases which are hard for outsiders to understand, and yet do not encode particularly hard or deep concepts. I’d advise against further additions to the lexicon. ‘Look for key considerations’ captures the key motivation for double crux better than ‘double crux’ itself, and its meaning is clear.

The practices of exceptional philosophers set a high bar: these are people selected for, and who practice heavily, argument and disagreement. It is almost conceivable that they are better at this than even the rationalist community, notwithstanding the vast and irrefragable evidence of this group’s excellence across so many domains. Double crux could still have pedagogical value: it might be a technique which cultivates better epistemic practices, even if those who already enjoy excellent epistemic practices have a better alternative. Yet this does not seem the original intent, nor does there appear much evidence of this benefit.

In the introduction to double crux, Sabien wrote that the core concept was ‘fairly settled’. In conclusion he writes:

We think double crux is super sweet. To the extent that you see flaws in it, we want to find them and repair them, and we’re currently betting that repairing and refining double crux is going to pay off better than trying something totally different. [emphasis in original]

I respectfully disagree. I see considerable flaws in double crux, which I don’t think have much prospect of adequate repair. Would that the time and effort were better spent looking elsewhere.