Keeping Beliefs Cruxy

Previously in sequence: Doublecrux is for building products.

To recap – you might want to doublecrux if any of the following apply:

  • You’re building a product, metaphorical or literal, and you disagree about how to proceed.

  • You want to make your beliefs more accurate, and you think a particular person you disagree with is likely to have useful information for you.

  • You just… enjoy resolving disagreements in a way that mutually pursues truth, for whatever reason.

Regardless, you might find yourself with the problem:

Doublecruxing takes a lot of time.

For a ‘serious’ disagreement, it frequently takes at least an hour, and often much longer. Habryka and I once took 12 hours over the course of 3 days to make any kind of progress on a particularly gnarly disagreement. And sometimes disagreements can persist for years despite significant mutual effort.

Now, doublecruxing is faster than many other forms of truth-aligned-disagreement resolution. I actually think it’s helpful to think of doublecrux as “the fastest way for two disagreeing-but-honest people to converge locally towards the truth”, and if someone came up with a faster method, I’d recommend deprecating doublecrux in favor of it. (Meanwhile, doublecrux is not guaranteed to be faster for 3+ people to converge, but I still expect it to be faster for smallish groups with particularly confusing disagreements.)

Regardless, multiple hours is a long time. Can we do better?

I think the answer is yes, and it basically comes in the form of:

  • Practice finding your own cruxes

  • Practice helping other people find their cruxes

  • Develop metacognitive skills that make crux-finding natural and intuitive

  • Cache the results into a clearer belief-network

Those are all things you can do unilaterally. If you get buy-in from your colleagues, you might also try something like “develop a culture that encourages people to do those four things, and to help each other do so.”

I’d summarize all of that as “develop the skill and practice of keeping your beliefs cruxy.”

By default, humans form beliefs for all kinds of reasons, without regard for how falsifiable they are. The result is a tangled, impenetrable web. Productive disagreement takes a long time because people are starting from the position of “impenetrable web.”

If you make a habit of asking yourself “what observations would change my mind about this?”, then you gain a few benefits.

First, your beliefs should (hopefully?) be more entangled with reality, period. You’ll gain the skill of noticing how your beliefs should constrain your anticipations, and then, if they fail to do so, you can maybe update your beliefs.

Second, if you’ve cultivated that skill, then during a doublecrux discussion, you’ll have an easier time engaging with the core doublecrux loop. (So, a conversation that might have taken an hour takes 45 minutes – your conversation partner might still take a long time to figure out their cruxes, but maybe you can do your own much faster.)

Third, once you’ve gotten into this habit, it will help your beliefs form in a cleaner, more reality-entangled fashion in the first place. Instead of building an impenetrable morass, you’ll be building a clear, legible network. (So, you might have all your cruxes fully accessible from the beginning of the conversation, and then it’s just a matter of stating them, and then helping your partner do the same.)

[Note: I don’t think you should optimize directly for your beliefs being legible. That is a recipe for burying illegible parts of your psyche and missing important information. Rather, if you try to actually understand your beliefs and what causes them, the legibility will come naturally as a side benefit.]

Finally, if everyone around you is doing this, it radically lowers the cost of productive disagreement. Instead of taking an hour (or three days), as soon as you bump into an important disagreement you can quickly navigate through your respective belief networks, find the cruxes, and skip to the part where you actually Do Empiricism.

I think (and here we get into speculation) that this can result in a phase shift in how disagreement works, enabling much more powerful discourse systems than we currently have.

I think keeping beliefs cruxy is a good example of a practice that is both a valuable “Rabbit Strategy” and something worth Stag Hunting Together on. (In Rabbit/Stag parlance, a Rabbit strategy is something you can just do, without relying on anyone else to help you, that will provide benefit to you. A Stag strategy is something with a larger payoff, but that only actually works if you’ve coordinated with other people to do it.)

If you have an organization, community, or circle of friends where many people have practiced keeping-beliefs-cruxy, I predict you will find both individuals benefiting and a truthseeking culture that is more powerful than the sum of its parts.


Hopefully obvious addenda, operationalizing:

My crux for all of this is that I think it’s locally tractable.

My recommendation is not that you go out of your way to practice this all the time – rather, that you spend some time on the margin practicing “looking for cruxes” whenever you actually have a noteworthy disagreement.

If you practice crux-finding whenever you have such a disagreement, and find that it isn’t locally useful, I probably wouldn’t recommend continuing.

If 10 people tried this and 6 of them told me it didn’t feel useful, even superficially, I’d probably change my mind significantly. If they tried it continuously for a year and there weren’t at least 3 of them who could point to transparently useful outcomes (such as disagreements taking much less time than they’d naively predict), I’d also significantly change my beliefs here.

This operationalization has some important caveats, such as “I don’t necessarily expect this to work for people who haven’t read the sequences or some equivalent. I also think it might require some skills that I haven’t written up explanations for yet – which I hope to do soon as I continue this sequence.”