Internalizing Internal Double Crux

In sciences such as psychology and sociology, internalization involves the integration of attitudes, values, standards and the opinions of others into one's own identity or sense of self.

Internal Double Crux is one of the most important skills I've ever learned. In the last two weeks, I've solved some serious, long-standing problems with IDC (permanently, as far as I can tell, and often in less than 5 minutes), a small sample of which includes:

  • Belief that I have intrinsically less worth than others

  • Belief that others are intrinsically less likely to want to talk to me

  • Belief that attendance at events I host is directly tied to my worth

  • Disproportionately negative reaction to being stood up

  • Long-standing phobia of bees and flies

I feel great, and I love it. Actually, most of the time I don't feel amazingly confident—I just feel not bad in lots of situations. Apparently this level of success with IDC across such a wide range of problems is unusual. Some advice, and then an example.

  • The emotional texture of the dialogue is of paramount importance. There should be a warm feeling between the two sides, as if they were two best friends who are upset with each other, but also secretly appreciate each other and want to make things right.

    • Each response should start with a sincere and emotional validation of some aspect of the other side's concern. In my experience, this feels like emotional ping pong.

    • For me, resolution of the issue is accompanied by a warm feeling that rises to my throat in a bubble-ish way. My heart also feels full. This is similar to (but distinct from) the 'aww' feeling you may experience when you see cute animals.

  • Focusing is an important (and probably necessary) sub-skill.

  • Don't interrupt or otherwise obstruct one of your voices because it's "stupid" or has "talked long enough"—be respectful. The outcome should not feel pre-ordained: two of your sub-agents / identities should share their emotional and mental models until they reach a fixed point of harmonious agreement.

  • Some beliefs aren't explicitly advocated by any part of you, and are instead propped up by certain memories. You can use Focusing to home in on those memories, and then employ IDC to resolve your ongoing reaction to them.

  • Most importantly, the arguments being made should be emotionally salient and not just detached, "empty" words. In my experience, if I'm totally "in my head", any modification of my System 1 feelings is impossible.

Note: this entire exchange took place internally over the course of 2 minutes, via a 50-50 mix of words and emotions. Unpacking it took significantly longer.

I may write more of these if this is helpful for people.


If I don't get this CHAI internship, I'm going to feel terrible, because that means I don't have much promise as an AI safety researcher.

Realist: Not getting the internship is moderate Bayesian evidence that you're miscalibrated on your potential. Someone promising enough to eventually become a MIRI researcher would be able to snag this, no problem. I feel worried that we're poorly calibrated and setting ourselves up for disappointment when we fall short.

Fire: I agree that not getting the internship would be fairly direct Bayesian evidence that there are others who are more promising right now. I think, however, that you're missing a few key points here:

  • We've made important connections at CHAI / MIRI.

  • Your main point is a total bucket error. There is no ontologically basic and immutable "promising-individual" property. Granted, there are biological and environmental factors outside our control here, but I think we score high enough on these metrics to be able to succeed through effort, passion, and increased mastery of instrumental rationality.

  • We've been studying AI safety for just a few months (in our free time, no less); most of that studying has been dedicated to building up foundational skills, not reviewing the literature itself. The applicants who are chosen may have a year or more of familiarity with the literature / relevant math on us (or perhaps not), and this should be included in the model.

  • One of the main sticking points raised during my final interview has since been fixed, but I couldn't signal that afterwards without seeming overbearing.

I guess the main thrust here is that although that would be a data point against our being able to have a tectonic impact right now, we simply don't have enough evidence to responsibly generalize. I'm worried that you're overly pessimistic, and it's pulling down our chances of actually being able to do something.

Realist: I definitely hear you that we've made lots of great progress, but is it enough? I'm so nervous about timelines, and the universe isn't magically calibrated to what we can do now.* We either succeed, or we don't—and pay the price. Do we really have time to tolerate almost being extraordinary? How is that going to do the impossible? I'm scared.

Fire: Yup. I'm definitely scared too (in a sense), but also excited. This is a great chance to learn, grow, have fun, and work with people we really admire and appreciate! Let's detach the grim-o-meter, since that strategy seems to strictly dominate being worried and insecure about whether we're doing enough.

Realist: I agree that detaching the grim-o-meter is the right thing to do, but… it makes me feel guilty.* I guess there's a part of me that believes that feeling bad when things could go really wrong is important.

Concern: Hey, that's me! Yeah, I'm really worried that if we detach that grim-o-meter, we'll become callous and flippant and carefree. I don't know if that's a reasonable concern, but the prospect makes me feel really queasy. Shouldn't we be really worried?

Realist: Actually, I don't know. Fire made a good point—the world will probably end up slightly better if we don't care about the grim-o-meter…

Fire: Hell yeah it will! What are we optimizing for here—an arbitrary deontological rule about feeling bad, or the actual world-state? Furthermore, we aren't discarding morality—we're discarding the idea that we should worry when the world is in a probably-precarious position. We'll still fight just as hard.

* Notice how related cruxes can (and should) be resolved in the same session. Resolution cannot happen if any part of you isn't fully on board with the agreement you've reached—in my experience, a holdout part feels like a small emptiness in the pit of my stomach.