Doublecrux is for Building Products


Two years ago, CFAR’s doublecrux technique seemed “probably good” to me, but I hadn’t really stress tested it. And it was particularly hard to learn in isolation, without a “real” disagreement to work on.

Meanwhile, some people seemed skeptical about it, and I wasn’t sure what to say to them other than “I dunno man, this just seems obviously good? Of *course* you want to treat disagreements as an opportunity to find truth together, share models, and look for empirical tests you can run?”

But for people who didn’t share that “of course”, that wasn’t very helpful.

For the past two years I’ve worked on a team where big disagreements come up pretty frequently, and where doublecrux has been more demonstrably helpful. I now have a clearer sense of where and when the technique is important.

Intractable Disagreements

Some intractable disagreements are fine

If you disagree with someone on the internet, or a random coworker or something, often the disagreement doesn’t matter. You and your colleague will go about your lives, one way or another. If you and your friends are fighting over “Who would win, Batman or Superman?”, coming to a clear resolution just isn’t the point.

It might also be that you and your colleague are doing some sort of coalition-politics fight over the Overton window, and most of the debate might be for the purpose of influencing the public. Or, you might be arguing about the Blue Tribe vs the Red Tribe as a way of signaling group affiliation, and earnestly understanding people isn’t the point.

This makes me sad, but I think it’s understandable, and sometimes it’s even important.

Such conversations don’t need to be doublecrux shaped, unless both participants want them to be.

Some disagreements are not fine

When you’re building a product together, it actually matters that you figure out how to resolve intractable disagreements.

I mean “product” here pretty broadly – anything that somebody is actually going to use. It could be a literal app or widget, or an event, or a set of community norms, or a philosophical idea. You might literally sell it, or just use it yourself. But I think there is something helpful about the “what if we were coworkers, how would we resolve this?” frame.

The important thing is “there is a group of people collaborating on it” and “there is a stakeholder who cares about it getting built.”

If you’re building a website, and one person thinks it should present all information very densely, and another person thinks it should be sleek and minimalist… somehow you need to actually decide what design philosophy to pursue. Options include (but aren’t necessarily limited to):

  • Anarchy

  • One person is in charge

  • Two or more people come to consensus

  • People have domain specializations in which one person is in charge (or gets veto power)


To start with, what’s wrong with the “everyone just builds what seems right to them and you hope it works out” option? Sometimes you’re building a bazaar, not a cathedral, and this is actually fine. But it often results in different teams building different tools at cross purposes, wasting motion.

One person in charge?

In a hierarchical company, maybe there’s a boss. If the decision is about whether to paint a bikeshed red or blue, the boss can just say “red”, and things move on.

This is less straightforward in the case of “minimalism” vs “high information density.”

First, is the boss even doing any design work? What if the boss and the lead designer disagree about aesthetics? If the lead designer hates minimalism, they’re gonna have a bad time.

Maybe the boss trusts the lead designer enough to defer to them on aesthetics. Now the lead designer is the decision maker. This is an improvement, but it just punts the problem down one level. If the lead designer is Just In Charge, a few things can still go wrong:

Other workers don’t actually understand minimalism

“Minimalist websites” and “information dense websites” are designed very differently. This filters into lots of small design decisions. Sometimes you can solve this with a comprehensive style guide. But those are a lot of work to create. And if you’re a small startup (or a small team within a larger company), you may not have the resources for that. It’d be nice if your employees just actually understood minimalism, so they could build good minimalist components.

The lead designer is wrong

Sometimes the lead designer’s aesthetic isn’t locally optimal, and this actually needs to be pointed out. If lead-designer Alice says “we’re building a minimalist website”, it might be important for another engineer or designer to say “Alice, you’re making weird tradeoffs for minimalism that are harming the user experience.”

Alice might think “Nah, you’re wrong about those tradeoffs. Minimalism is great and history will bear me out on this.” But Alice might also respect Bob’s opinion enough to want to come to some kind of principled resolution. If Bob’s been right about similar things before, what should Alice and Bob do, if Alice wants to find out that she’s wrong – if and only if she’s actually wrong, and her minimalist aesthetic really is harming the user experience?

The lead designer is right, but other major stakeholders think she’s wrong

Alternately, maybe Bob thinks Alice is making bad design calls, but Alice is actually just making the right calls. Bob has rare preferences that don’t overlap much with the average user’s, which shouldn’t necessitate a major design overhaul.

Initially, this will look the same to both parties as the previous case.

If Alice has listened to Bob’s complaints a bunch, and Alice generally respects Bob but thinks he’s wrong here, at some point she needs to say “Look Bob, we just need to actually build the damn product now; we can’t rehash the minimalism argument every time we build a new widget.”

I think it’s useful for Bob to gain the skill of saying “Okay, fine”, letting go of his frustration and embracing the design paradigm.

But that’s a tough skill. And meanwhile, Bob is probably going to spend a fair amount of time and energy being annoyed about having to build a product he’s less excited about. And sometimes, Bob’s work is less efficient because he doesn’t understand minimalism and keeps building site-components subtly incompatible with it.

What if there were a process by which either Alice would update or Bob would update – a process that both Alice and Bob considered fair?

You might just call that process “regular debate.” But the problem is that regular debate often just doesn’t work. Alice says “We need X, because Y”. Bob says “No, we need A, because B”, and they somehow both repeat those points over and over without ever changing each other’s mind.

This wastes loads of time that could have been better spent building new site features.

Even if Alice is in charge and gets final say, it’s still suboptimal for Bob to have lower morale and keep making subtly wrong widgets.

And even if Bob understands that Alice is in charge, it might still be suboptimal for Bob to feel like Alice never really understood exactly what his concerns were.

What if there’s no boss?

Maybe your “company” is just two friends in a basement doing a project together, and there isn’t really a boss. In this case, the problem is much sharper – somehow you need to actually make a call.

You might solve this by deciding to appoint a decision-maker – changing the situation from a “no boss” problem to a “boss” problem. But if you’re just two friends making a game together in your spare time, for fun, this might kinda suck. (If the whole point was to make it together as friends, a hierarchical system may be fundamentally un-fun and defeat the point.)

You might be doing a more serious project, where you agree that it’s important to have clear coordination protocols and hierarchy. But it nonetheless feels premature to commit to “Alice is always in charge of design decisions.” Especially if Bob and Alice both have reasonable design skills. And especially if it’s early in the project and they haven’t yet decided what their product’s design philosophy should be.

In that case, you can start with straightforward debate, or making a pros/cons list, or exploring the space a bit and hoping you come to agreement. But if you’re not coming to agreement… well, you need to do something.

If “regular debate” is working for you, cool.

If “just talking about the problem” is working, obviously you don’t have an issue. Sometimes the boss actually just says “we’re doing it this way” and it doesn’t require any extensive model sharing.

If you’ve never run into the problem of intractable disagreement while collaborating on something important, this blog post is not for you. (But maybe keep it in the back of your mind in case you do run into such an issue.)

But working on the LessWrong team for about 1.5 years, I’ve run into numerous deep disagreements, and my impression is that such disagreements are common – especially in domains where you’re solving a novel problem. We’ve literally argued a bunch about minimalism, which isn’t an especially unusual design question. We’ve also had much weirder disagreements about integrity and intellectual progress and AI timelines and more.

We’ve resolved many (although not all) of those disagreements. In many cases, doublecrux has been helpful as a framework.

What’s Doublecrux again?

If you’ve made it this far, presumably it seems useful to have some kind of process-for-consensus that works better than whatever you and your colleagues were doing by default.

Desiderata that I personally have for such a process:

  • Both parties can agree that it’s worth doing

  • It should save more time than it costs (or produce value commensurate with the time you put in)

  • It works even when both parties have different frames or values

  • If necessary, it untangles confused questions, and replaces them with better ones

  • If necessary, it untangles confused goals, and replaces them with better ones

  • If people are disagreeing because of aesthetic differences like “what is beautiful/good/obviously-right”, it provides a framework wherein people can actually change their mind about “what is beautiful and good and right.”

  • Ultimately, it lets you “get back to work”, and actually build the damn product, confident that you are going about it the right way.

[Many of these goals were not assumptions I started with. They’re listed here because I kept running into failures relating to each one. Over the past two years I’ve had some success with each of those points.]

Importantly, such a process doesn’t necessarily need to answer the original question you asked. In the context of building a product, what’s important is that you arrive at a model of the world which you both agree on, and which informs which actions to take.

Doublecrux is a framework that I’ve found helpful for the above concerns. But I’d consider it a win for this essay if I’ve at least clarified why it’s desirable to have some such system. I share Duncan’s belief that it’s more promising to repair or improve doublecrux than to start from scratch. But if you’d rather start from scratch, that’s cool.

Components of Doublecrux – Cognitive Motions vs Attitudes

There are two core concepts behind the doublecrux framework:

  • A set of cognitive motions:

    • Looking for the cruxes of your beliefs, and asking what empirical observations would change your mind about them. (Recursing until you find a crux you and your partner both share – the “doublecrux”)

  • A set of attitudes:

    • Epistemic humility

      • “maybe I’m the wrong one”

    • Good faith

      • “I trust my partner to be cooperating with me”

    • Belief that objective reality is real

      • “there’s an actual right answer here, and it’s better for each of us if we’ve both found it”

    • Earnest curiosity

Of those, I think the set of attitudes is more important than the cognitive motions. If the “search for cruxes and empirical tests” thing isn’t working, but you have the four attitudes, you can probably find other ways to make progress. Meanwhile, if you don’t each have those four attitudes, you don’t have the foundations necessary to doublecrux.

Using language for truthseeking, not politics

But I think the cognitive motions are helpful, for this reason: much of human language is by default politics rather than truthseeking. “Regular debate” often reinforces the use of language-as-politics, which activates brain modules that are optimizing to win – which involves strategic blindness. (I mean something a bit nuanced by “politics” here, beyond the scope of this post. But basically: optimizing beliefs and words for how you fit into the social landscape, rather than for what corresponds to objective reality.)

The “search for empirical tests and cruxes of beliefs” motion is designed to keep each participant’s brain in a “language-as-truthseeking” mode. If you’re asking yourself “why would I change my mind?”, it’s more natural to be honest with yourself and your partner than if you’re asking “how can I change their mind?”

Meanwhile, the focus on mutual, opposing cruxes keeps things fruitful. Disagreement is more interesting and useful than agreement – it provides an opportunity to actually learn. If people are doing language-as-politics, then disagreement is a red flag that you are on opposing sides and might be threatening each other (which might either prompt you to fight, or prompt you to “agree to disagree”, preserving the social fabric by sweeping the problem under the rug).

But if you can both trust that everyone’s truthseeking, then you can drill directly into disagreements without worrying about that – optimizing first for learning, and then for building a shared model that lets you actually make progress on your product.

Trigger Action Plans

Knowing all this is well and good, but what might it translate into in terms of actions?

If you happen to have a live disagreement right now, maybe you can try doublecrux. But if not, what circumstances should prompt you to try it?

I’ve found the “Trigger Action Plan” framework useful for this sort of thing, as a basic rationality building-block skill. If you notice an unhelpful conversational pattern, you can build an association where you take some particular action that seems useful in that circumstance. (Sometimes, the generic trigger-action of “notice something unhelpful is happening ----> stop and think” is good enough.)

In this case, a trigger-action I’ve found useful is:

TRIGGER: Notice that we’ve been arguing awhile, and someone has just repeated the same argument they made a little while ago (for the second, or especially third, time).

ACTION: Say something like: “Hey, I notice that we’ve been repeating ourselves a bit. I feel like this conversation is kinda going in circles…” followed by either “Would you be up for trying to formally doublecrux about this?” or following Duncan’s vaguer suggestions about how to unilaterally improve a conversation (depending on how much shared context you and your partner have).


Takeaways

  • Intractable disagreements don’t always matter. But if you’re trying to build something together, and disagreeing substantially about how to go about it, you will need some way to resolve that disagreement.

  • Hierarchy can obviate the need for resolution if the disagreement is simple, and if everyone agrees to respect the boss’s decision.

  • If the disagreement has persisted awhile and it’s still wasting motion, at the very least it’s probably useful to do something differently. In particular, if you’ve been repeating the same arguments at each other, that’s a sign it’s time to try a different approach.

  • Doublecrux is a particular framework I’ve found helpful for resolving intractable disagreements (when they are important enough to invest serious energy and time into). It focuses the conversation into “truthseeking” mode, and in particular strives to avoid “political” mode.