What I’ve learned from Less Wrong

Related to: Goals for which Less Wrong does (and doesn’t) help

I’ve been compiling a list of the top things I’ve learned from Less Wrong in the past few months. If you’re new here or haven’t been here since the beginning of this blog, perhaps my personal experience from reading the backlog of articles known as the sequences can introduce you to some of the more useful insights you might get from reading and using Less Wrong.

1. Things can be correct—Seriously, I forgot. For the past ten years or so, I politely agreed with the “deeply wise” convention that truth could never really be determined or that it might not really exist or that if it existed anywhere at all, it was only in the consensus of human opinion. I think I went this route because being sloppy here helped me “fit in” better with society. It’s much easier to be egalitarian and respect everyone when you can always say “Well, I suppose that might be right—you never know!”

2. Beliefs are for controlling anticipation (Not for being interesting) - I think in the past, I looked to believe surprising, interesting things whenever I could get away with the results not mattering too much. Also, in a desire to be exceptional, I naïvely reasoned that believing similar things to other smart people would probably get me the same boring life outcomes that many of them seemed to be getting… so I mostly tried to have extra random beliefs in order to give myself a better shot at being the most amazingly successful and awesome person I could be.

3. Most people’s beliefs aren’t worth considering—Since I’m no longer interested in collecting interesting “beliefs” to show off how fascinating I am or to give myself better odds of outdoing others, it no longer makes sense to be a meme-collecting, universal egalitarian the way I was before. This includes dropping the habit of seriously considering all others’ improper beliefs that don’t tell me what to anticipate and exist only to sound interesting or smart.

4. Most of science is actually done by induction—Real scientists don’t get their hypotheses by sitting in bathtubs and screaming “Eureka!”. To come up with something worth testing, a scientist needs to do lots of sound induction first or borrow an idea from someone who already used induction. This is because induction is the only way to reliably find candidate hypotheses which deserve attention. Examples of bad ways to find hypotheses include finding something interesting or surprising to believe in and then pinning all your hopes on that thing turning out to be true.
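(A small formal aside of my own, not from the original post: one classic way to make induction precise is Laplace’s rule of succession. Starting from a uniform prior over an event’s unknown frequency, after observing s occurrences in n independent trials, the probability of occurrence on the next trial is

\[
P(\text{occurs on trial } n+1 \mid s \text{ occurrences in } n \text{ trials}) = \frac{s+1}{n+2}.
\]

So after 1000 sunrises in 1000 days, you’d assign about 1001/1002 ≈ 0.999 to the sun rising tomorrow: confidence earned by accumulated observation rather than by picking a hypothesis for its surprise value.)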

5. I have free will—Not only is the free will problem solved, but it turns out it was easy. I have the kind of free will worth caring about, and that’s actually comforting since I had been unconsciously ignoring this out of fear that the evidence appeared to be going against what I wanted to believe. Looking back, I think this was actually kind of depressing me and probably contributing to my attitude that having interesting rather than correct beliefs was fine, since it looked like it might not matter what I did or believed anyway. Also, philosophers failing to uniformly mark this as “settled” and move on is not because this is a questionable result… they’re just in a world where most philosophers are still having trouble figuring out whether god exists or not. So it’s not really easy to make progress on anything when there is more noise than signal in the “philosophical community”. Come to think of it, the AI community and most other scientific communities have this same problem… which is why I no longer read breaking science news—it’s almost all noise.

6. Probability / Uncertainty isn’t in objects or events—It’s only in minds. Sounds simple after you understand it, but I feel like this one insight often allows me to have longer trains of thought now without going completely wrong.
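(A toy illustration of my own, not from the sequences: suppose a fair coin has been flipped and hidden under a cup. The coin itself is already definitely heads or definitely tails; only the observers are uncertain, and two differently informed observers assign different probabilities to the very same event:

\[
P(\text{heads} \mid \text{I haven't looked}) = \frac{1}{2}, \qquad P(\text{heads} \mid \text{my friend peeked and saw heads}) = 1.
\]

Nothing about the coin changed between those two assignments; only the information in each mind differs. The probability lives in the map, not in the territory.)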

7. Cryonics is reasonable—Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me. It’s well within my budget for caring about myself and others… such as my future selves in forward branching multi-verses.


There are countless other important things that I’ve learned but haven’t documented yet. I find it pretty amazing what this site has taught me in only 8 months of sporadic reading. Though, to be fair, it didn’t happen by accident or by reading the recent comments and promoted posts, but almost exclusively by reading all the core sequences and then participating more after that.

And as a personal aside (possibly some others can relate): I still love-hate Less Wrong and find reading and participating on this blog to be one of the most frustrating and challenging things I do. And many of the people in this community rub me the wrong way. But in the final analysis, the astounding benefits gained make the annoying bits more than worth it.

So if you’ve been thinking about reading the sequences but haven’t been making the time to do it, I second Anna’s suggestion that you get around to that. And the rationality exercise she linked to was easily the single most effective hour of personal growth I had this year, so I highly recommend that as well if you’re game.

So, what have you learned from Less Wrong? I’m interested in hearing others’ experiences too.