Leveling Up in Rationality: A Personal Journey

See also: Reflections on rationality a year out

My favorite part of Lord of the Rings was skipped in both film adaptations. It occurs when our four hobbit heroes (Sam, Frodo, Merry and Pippin) return to the Shire and learn it has been taken over by a gang of ruffians. Merry assumes Gandalf will help them free their home, but Gandalf declines:

I am not coming to the Shire. You must settle its affairs yourselves; that is what you have been trained for… My dear friends, you will need no help. You are grown up now. Grown indeed very high...

As it turns out, the hobbits have acquired many powers along their journey — powers they use to lead a resistance and free the Shire.

That is how I felt when I flew home for the holidays this December. Minnesota wasn't ruled by ruffians, but the familiar faces and places reminded me of the person I had been before I moved away, just a few years ago.

And I'm just so much more powerful than I used to be.

And in my case, at least, many of my newfound powers seem to come from having seriously leveled up in rationality.

Power 0: Curiosity

I was always "curious," by which I mean I felt like I wanted to know things. I read lots of books and asked lots of questions. But I didn't really want to know the truth, because I didn't care enough about the truth to study, say, probability theory and the cognitive science of how we deceive ourselves. I just studied different Christian theologies — and, when I was really daring, different supernatural religions — and told myself that was what honest truth-seeking looked like.

It took 20 years for reality to pierce my comfortable, carefully cultivated bubble of Christian indoctrination. But when it finally popped, I realized I had (mostly) wasted my life thus far, and I was angry. Now I studied things not just for the pleasure of discovery and the gratifying feeling of caring about truth, but because I really wanted an accurate model of the world so I wouldn't do stupid things like waste two decades of life.

And it was this curiosity, more than anything else, that led to everything else. So long as I burned for reality, I was bound to level up.

Power 1: Belief Propagation

One factor that helped religion cling to me for so long was my ability to compartmentalize, to shield certain parts of my beliefs from attack, to apply different standards to different beliefs like the scientist outside the laboratory. When genuine curiosity tore down those walls, it didn't take long for the implications of my atheism to propagate. I noticed that contra-causal free will made no sense for the same reasons God made no sense. I noticed that whatever value existed in the universe was made of atoms. I assumed the basics of transhumanism without knowing there was a thing called "transhumanism." I noticed that minds didn't need to be made of meat, and that machines could be made more moral than humans. (I called them "artificial superbrains" at the time.) I noticed that scientific progress could actually be bad, because it's easier to destroy the world than to protect it. I also noticed we should therefore "encourage scientific research that saves and protects lives, and discourage scientific research that may destroy us" — and this was before I had read about existential risk and "differential technological development."

Somehow, I didn't notice that naturalism + scientific progress also implied intelligence explosion. I had to read that one. But when I did, it set off another round of rapid belief updates. I noticed that the entire world could be lost, that moral theory was an urgent engineering problem, that technological utopia is actually possible (however unlikely), and more.

The power of belief propagation gives me clarity of thought and coherence of action. My actions are now less likely to be informed by multiple incompatible beliefs, though this still occurs sometimes due to cached thoughts.

Power 2: Scholarship

I was always one to look things up, but before my deconversion my scholarship heuristic seems to have been "Find something that shares most of my assumptions and tells me roughly what I want to hear, filled with lots of evidence to reassure me of my opinion." That's not what I thought I was doing at the time, but looking back at my reading choices, that's what it looks like I was doing.

After being taken by genuine curiosity, my heuristic became something more like "Check what the mainstream scientific consensus is on the subject, along with the major alternative views and most common criticisms." Later, I added qualifications like "But watch out for signs that an entire field of inquiry is fundamentally unsound."

The power of looking shit up proved to have enormous practical value. How could I make Common Sense Atheism popular, quickly? I studied how to build blog traffic, applied the major lessons, and within six months I had one of the most popular atheism blogs on the internet. How could I improve my success with women? I skim-read dozens of books on the subject, filtered out the best advice, applied it (after much trepidation), and eventually had enough success that I didn't need to worry about it anymore. What are values, and how do they work? My search led me from philosophy to affective neuroscience and finally to neuroeconomics, where I hit the jackpot and wrote A Crash Course in the Neuroscience of Human Motivation. How could I be happier? I studied the science of happiness, applied its lessons, and went from occasionally suicidal to stably happy. How could I make the Singularity Institute more effective? I studied non-profit management and fundraising, and am currently (with lots of help) doing quite a lot to make the organization more efficient and credible.

My most useful scholarship win had to do with beating akrasia. Eliezer wrote a post about procrastination that drew from personal anecdote but not a single experiment. This prompted me to write my first post, which suggested he ought to have done a bit of research on procrastination, so he could stand on the shoulders of giants. A simple Google Scholar search on "procrastination" turned up a recent "meta-analytic and theoretical review" of the field as the 8th result, which pointed me to the resources I used to write How to Beat Procrastination. Mastering that post's algorithm for beating akrasia might be the most useful thing I've ever done, since it empowers everything else I try to do.

Power 3: Acting on Ideas

Another lesson from my religious deconversion was that abstract ideas have consequences. Because of my belief in the supernatural, I had spent 20 years (1) studying theology instead of math and science, (2) avoiding sexual relationships, and (3) training myself in fantasy-world "skills" like prayer and "sensing the Holy Spirit." If I wanted to benefit from having a more accurate model of the world as much as I had been harmed by having a false model, I'd need to actually act in response to the most probable models of the world I could construct.

Thus, when I realized I didn't like the Minnesota cold and could be happy without seeing my friends and family that often, I threw all my belongings in my car and moved to California. When I came to take intelligence explosion seriously, I quit my job in L.A., moved to Berkeley, interned with the Singularity Institute, worked hard, got hired as a researcher, and was later appointed Executive Director.

Winning with Rationality

These are just a few of my rationality-powers. Yes, I could have gotten these powers another way, but in my case they seemed to flow largely from that first virtue of rationality: genuine curiosity. Yes, I've compressed my story and made it sound less messy than it really was, but I do believe I've been gaining in rationalist power — the power of agency, systematized winning — and that my life is much better as a result. And yes, most people won't get these results, due to things like akrasia, but maybe if we figure out how to teach the unteachable, those chains won't hold us anymore.

What does a Level 60 rationalist look like? Maybe Eliezer Yudkowsky + Tim Ferriss? That sounds like a worthy goal! A few dozen people that powerful might be able to, like, save the world or something.