Connecting Your Beliefs (a call for help)

A couple weeks after meeting me, Will Newsome gave me one of the best compliments I’ve ever received. He said: “Luke seems to have two copies of the Take Ideas Seriously gene.”

What did Will mean? To take an idea seriously is “to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded,” as in a Bayesian belief network.
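To make the metaphor concrete, here is a toy sketch in Python. The belief names and probabilities are invented for illustration, and the chain structure is far simpler than a real belief network; the point is only that an update upstream is not finished until everything downstream has been recomputed.

```python
# A toy sketch (hypothetical names and numbers) of what "propagating a
# belief update" means in a tiny chain of binary beliefs, where each
# belief depends only on its single parent.

def propagate(p_parent, p_child_given_parent, p_child_given_not_parent):
    """Recompute the child's marginal probability from the parent's new one."""
    return (p_child_given_parent * p_parent
            + p_child_given_not_parent * (1 - p_parent))

# Hypothetical beliefs:
#   A = "scientific progress continues"
#   B = "an ultraintelligent machine is eventually built"
#   C = "an intelligence explosion occurs"
p_a = 0.95                        # A, after some new evidence
p_b = propagate(p_a, 0.70, 0.05)  # B must be recomputed from the new P(A)
p_c = propagate(p_b, 0.90, 0.01)  # C must be recomputed from the new P(B)

print(f"P(B) = {p_b:.2f}, P(C) = {p_c:.2f}")
```

Real belief networks also let evidence flow backward from children to parents; the forward-only chain above is just the simplest picture of an update rippling through connected beliefs.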

Belief propagation is what happened, for example, when I first encountered that thundering paragraph from I.J. Good (1965):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention that man need ever make.

Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed; I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.

I spent a week looking for counterarguments, to check whether I was missing something, and then accepted that intelligence explosion was likely (so long as scientific progress continued). And though I hadn’t read Eliezer on the complexity of value, I had read David Hume and Joshua Greene. So I already understood that an arbitrary artificial intelligence would almost certainly not share our values.

Accepting my belief update about intelligence explosion, I propagated its implications throughout my web of beliefs. I realized that:

  • Things can go very wrong, for we live in a world beyond the reach of God.

  • Scientific progress can destroy the world.

  • Strong technological determinism is true; purely social factors will be swamped by technology.

  • Writing about philosophy of religion was not important enough to consume any more of my time.

  • My highest-utility actions are either those that work toward reducing AI risk, or those that work toward making lots of money so I can donate to AI risk reduction.

  • Moral theory is not idle speculation but an urgent engineering problem.

  • Technological utopia is possible, but unlikely.

  • The value of information concerning intelligence explosion scenarios is extremely high.

  • Rationality is even more important than I already believed it was.

  • and more.

I had encountered the I.J. Good paragraph on Less Wrong, so I put my other projects on hold and spent the next month reading almost everything Eliezer had written. I also found articles by Nick Bostrom and Steve Omohundro. I began writing articles for Less Wrong and learning from the community. I applied to Singularity Institute’s Visiting Fellows program and was accepted. I quit my job in L.A., moved to Berkeley, worked my ass off, got hired, and started collecting research related to rationality and intelligence explosion.

My story surprises people because it is unusual. Human brains don’t usually propagate new beliefs so thoroughly.

But this isn’t just another post on taking ideas seriously. Will already offered some ideas on how to propagate beliefs. He also listed some ideas that most people probably aren’t taking seriously enough. My purpose here is to examine one prerequisite of successful belief propagation: actually making sure your beliefs are connected to each other in the first place.

If your beliefs aren’t connected to each other, there may be no paths along which you can propagate a new belief update.

I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences. The trouble is that even proper beliefs can be inadequately connected to other proper beliefs inside the human mind.
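In the network metaphor, “no paths” is literal: an update can only reach beliefs that are actually linked to the one you changed. Another toy sketch, with invented belief labels and edges:

```python
# A toy sketch of the "no paths" problem. The belief labels and edges
# are invented; the point is only that propagation follows links, so an
# update to an unlinked belief never reaches the rest of the web.
from collections import deque

# Hypothetical "influences" edges among beliefs.
edges = {
    "machines can surpass human intelligence": ["intelligence explosion is likely"],
    "intelligence explosion is likely": ["moral theory is an engineering problem",
                                         "reducing AI risk is high-value"],
    "an isolated belief I never linked to anything": [],
}

def reachable_from(start):
    """Every belief a propagated update could touch, starting from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for child in edges.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen - {start}

print(reachable_from("machines can surpass human intelligence"))       # three downstream beliefs
print(reachable_from("an isolated belief I never linked to anything"))  # set(): nowhere to go
```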

I wrote this post because I’m not sure what the “making sure your beliefs are actually connected in the first place” skill looks like when broken down to the 5-second level.

I was chatting about this with atucker, who told me he noticed that successful businessmen may have this trait more often than others. But what are they doing, at the 5-second level? What are people like Eliezer and Carl doing? How does one engage in the purposeful decompartmentalization of one’s own mind?