How I’d Introduce LessWrong to an Outsider

Note/edit: I’m imagining explaining this to a friend or family member who is at least somewhat charitable and trusting of my judgement. I am not imagining simply putting this on the About page. I should have made this clear from the beginning—my bad. However, I do believe that some (but not all) of the design decisions would be effective on something like the About page as well.

There’s this guy named Eliezer Yudkowsky. He’s really, really smart. He founded MIRI, wrote a popular fanfic of Harry Potter that centers around rationality, and has a particularly strong background in AI, probability theory, and decision theory. There’s another guy named Robin Hanson. Hanson is an economics professor at George Mason, and has a background in physics, AI and statistics. He’s also really, really smart.

Yudkowsky and Hanson started a blog called Overcoming Bias in November of 2006. They blogged about rationality. Later on, Yudkowsky left Overcoming Bias and started his own blog—LessWrong.

What is rationality? Well, for starters, it’s incredibly interdisciplinary. It involves academic fields like probability theory, decision theory, logic, evolutionary psychology, cognitive biases, lots of philosophy, and AI. The goal of rationality is to help you be right about the things you believe. In other words, the goal of rationality is to be wrong less often. To be LessWrong.

Weird? Useful?

LessWrong may seem fringe-y and cult-y, but the teachings are usually things that aren’t controversial at all. Again, rationality teaches you things like probability theory and evolutionary psychology. Things that academics all agree on. Things that academics have studied pretty thoroughly. Sometimes the findings haven’t made it to mainstream culture yet, but they’re almost always things that the experts all agree on and consider to be pretty obvious. These aren’t some weird nerds cooped up in their parents’ basement preaching crazy ideas they came up with. These are early adopters who are taking things that have already been discovered, bringing them together, and showing us how the findings could help us be wrong less frequently.

Rationalists tend to be a little “weird” though. And they tend to believe a lot of “weird” things. A lot of science-fiction-y things. They believe we’re going to blend with robots and become transhumans soon. They believe that we may be able to freeze ourselves before we die, and then be revived by future generations. They believe that we may be able to upload our consciousness to a computer and live as a simulation. They believe that computers are going to become super powerful and completely take over the world.

Personally, I don’t understand these things well enough to really speak to their plausibility. My impression so far is that rationalists have very good reasons for believing what they believe, and that they’re probably right. But perhaps you don’t share this impression. Perhaps you think those conclusions are wacky and ridiculous. Even if you think this, it’s still possible that the techniques may be useful to you, right? It’s possible that rationalists have misapplied the techniques in some ways, but that if you learn the techniques and add them to your arsenal, they’ll help you level up. Consider this before writing rationality off as wacky.


So, what does rationality teach you? Here’s my overview:

  • The difference between reality, and our models of reality (see map vs. territory).

  • That things are made of their components. Airplanes are made up of quarks. “Airplane” is a concept we created to model reality.

  • To think in gray. To say, “I sense that x is true” or “I’m pretty sure that x is true” instead of “X is true”.

  • To update your beliefs incrementally. To say, “I still don’t think X is true, but now that you’ve shown me Y, I’m somewhat less confident.” On the other hand, a Black And White Thinker would say, “Eh, even though you showed me Y, I still just don’t think X is true.”

  • How much we should actually update our beliefs when we come across a new observation. A little? A lot? Bayes’ theorem has the answers. It is a fundamental component of rationality.

  • That science, as an institution, prevents you from updating your beliefs quickly enough. Why? Because it requires a lot of good data before you’re allowed to update your beliefs at all. Even just a little bit. Of course you shouldn’t update too much with bad data, but you should still nudge your beliefs a bit in the direction that the data point toward.

  • To make your beliefs about things that are actually observable. Think: if a tree falls in a forest and no one hears it, does it make a sound? Adding this technique to your arsenal will help you make sense of a lot of philosophical dilemmas.

  • To make decisions based on consequences. To distinguish between your end goal, and the stepping stones you must pass on your way there. People often forget what it is that they are actually pursuing, and get tricked into pursuing the stepping stones alone. E.g., getting too caught up moving up the career ladder.

  • How evolution really works, and how it helps explain why we are the way we are today. Hint: it’s slow and stupid.

  • How quantum physics really works.

  • How words can be wrong.

  • Utilitarian ethics.

  • That you have A LOT of biases. And that by understanding them, you could side-step the pain that they would otherwise have caused you.

  • Similarly, that you have A LOT of “failure modes”, and that by understanding them, you could side-step a lot of the pain that they would otherwise have caused you.

  • Lots of healthy mindsets you should take. For example:

    • Tsuyoku Naritai—“I want to become stronger!”

    • Notice when you’re confused.

    • Recognize that being wrong is exciting, and something you should embrace—it means you are about to learn something new and level up!

    • Don’t just believe the opposite of what your stupid opponent believes out of frustration and spite. Sometimes they’re right for the wrong reasons. Sometimes there’s a third alternative you’re not considering.

    • To give something a fair chance, be sure to think about it for five minutes by the clock.

    • When you’re wrong, scream “OOPS!”. That way, you can just move on in the right direction immediately. Don’t just make minor concessions and rationalize why you were only partially wrong.

    • Don’t be content with just trying. You’ll give up too early if you do that.

    • “Impossible” things are often not actually impossible. Consider how impossible wireless communication would seem to someone who lived 500 years ago. Try studying something for a year or five before you claim that it is impossible.

    • Don’t say things to sound cool, say them because they’re true. Don’t be overly humble. Don’t try to sound wise by being overly neutral and cautious.

    • “Mere reality” is actually pretty awesome. You can vibrate air molecules in an extremely, extremely precise way, such that you can take the contents of your mind and put them inside another person’s mind? What???? Yeah. It’s called talking.

    • Shut up and calculate. Sometimes things aren’t intuitive, and you just have to trust the math.

    • It doesn’t matter how good you are relative to others, it matters how good you are in an absolute sense. Reality doesn’t grade you on a curve.
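The Bayesian-updating idea in the list above is easy to make concrete. Here is a toy sketch of my own (the numbers are made up for illustration; this is not from the Sequences) showing how Bayes’ theorem tells you exactly how much one new observation should shift your confidence:

```python
# Toy illustration of Bayesian updating: how much should a single
# observation shift your confidence in a hypothesis?

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    # Total probability of seeing the evidence at all.
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Start out 20% confident that "X is true".
belief = 0.20
# Observe something twice as likely if X is true (80% vs. 40%).
belief = bayes_update(belief, 0.8, 0.4)
print(round(belief, 3))  # 0.333: noticeably more confident, far from certain
```

The point of the exercise: the update is neither “ignore it” nor “flip to certainty”, but a precise intermediate amount dictated by how much more likely the observation is under one hypothesis than the other.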

Sound interesting? Good! It is!

Eliezer wrote about all of this stuff in bite-sized blog posts. About one per day. He claims it helps him write faster as opposed to writing one big book. Originally, the collection of posts was referred to as The Sequences, and was organized into categories. More recently, the posts were refined and brought together into a book—Rationality: From AI to Zombies.

Personally, I believe the writing is dense and difficult to follow. Things like AI are often used as examples in places where a more accessible example could have been used instead. Eliezer himself confesses that he needs to “aim lower”. Still, the content is awesome, insightful, and useful, so if you can make your way past some of the less clear explanations, I think you have a lot to gain. In particular, I find the Wiki and the article summaries to be incredibly useful. There’s also HPMOR—a fanfic Eliezer wrote to describe the teachings of rationality in a more accessible way.


So far, there hasn’t been enough of a focus on applying rationality to help you win in everyday life. Instead, the focus has been on solving big, difficult, theoretical problems. Eliezer mentions this in the preface of Rationality: From AI to Zombies. Developing the more practical, applied part of The Art is definitely something that needs to be done.

Learning how to rationally work in groups is another thing that really needs to be done. Unfortunately, rationalists aren’t particularly good at working together. Yet.


From 2009-2014 (excluding 2010), there were surveys of the LessWrong readership. There were usually about 1,500 responders, which tells you something about the size of the community (note that there are people who read/lurk/comment, but who didn’t submit the survey). Readers live all over the globe, and tend to come from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc. crowd. There are also a lot of effective altruists—people who try to do good for the world, and who try to do so as efficiently as possible. See the wiki’s FAQ for results of these surveys.

There are meet-ups in many cities, and in many countries. Berkeley is considered to be the “hub”. See How to Run a Successful LessWrong Meetup for a sense of what these meet-ups are like. Additionally, there is a Slack group, and an online study hall. Both are pretty active.

Community members mostly agree with the material described in The Sequences. This common jumping-off point makes communication smoother and more productive. And often more fulfilling.

The culture amongst LessWrongians is something that may take some getting used to. Community members tend to:

  • Be polyamorous.

  • Drink Soylent.

  • Communicate explicitly. E.g., “I’m beginning to find this conversation aversive, and I’m not sure why. I propose we hold off until I’ve figured that out.”

  • Be a bit socially awkward (about 1/4 are on the autism spectrum).

  • Use lots of odd expressions.

In addition… they’re totally awesome! In my experience, I’ve found them to be particularly caring, altruistic, empathetic, open-minded, good at communicating, humble, intelligent, interesting, reasonable, hard-working, respectful and honest. Those are the kinds of people I’d like to spend my time amongst.


LessWrong isn’t nearly as active as it used to be. In “the golden era”, Eliezer along with a group of other core contributors would post insightful things many times each week. Now, these core contributors have fled to work on their own projects and do their own things. There is much less posting on LessWrong than there used to be, but there is still some. And there is still related activity elsewhere. See the wiki’s FAQ for more.

Related Organizations

MIRI—Tries to make sure AI is nice to humans.

CFAR—Runs workshops that focus on being useful to people in their everyday lives.


Of course, I may have misunderstood certain things. E.g., I don’t feel that I have a great grasp on Bayesianism vs. science. If so, please let me know.

Note: in some places, I exaggerated slightly for the sake of a smoother narrative. I don’t feel that the exaggerations interfere with the spirit of the points made (DH6). If you disagree, please let me know by commenting.