Taking Ideas Seriously

I, the author, no longer endorse this post.

Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the rather simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.

Eliezer Yudkowsky and Michael Vassar are two rationalists who have something of an aura of formidability about them. This is especially true of Michael Vassar in live conversation, where he's allowed to jump around from concept to concept without being penalized for not having a strong thesis. Eliezer did something similar in his writing by creating a foundation of reason upon which he could build new concepts without having to start explaining everything anew every time. Michael and Eliezer know a lot of stuff and are able to make connections between the things they know: they see which nodes of knowledge are relevant to a given belief or decision, or, failing that, know which algorithm to use to figure out which nodes of knowledge are likely to be relevant. They have all the standard Less Wrong rationality tools too, of course, and a fair number of heuristics and dispositions that haven't been covered on Less Wrong. But I believe it is this aspect of their rationality, the coherent and cohesive and carefully balanced web of knowledge and belief nodes, that causes people to perceive them as formidable rationalists, of a kind not to be disagreed with lightly.

The common trait of Michael and Eliezer and all top tier rationalists is their drive to really consider the implications and relationships of their beliefs. It's something like a failure to compartmentalize; it's what has led them to develop their unified webs of knowledge, instead of developing one web of beliefs about politics that is completely separate from their webs of belief about religion, or science, or geography. Compartmentalization is the natural and automatic process by which belief nodes or groups of belief nodes become isolated from their overarching web of beliefs, or many independent webs are created, or the threads between nodes are not carefully and precisely maintained. It is the ground state of your average scientist. When Eliezer first read about the idea of a Singularity, he didn't do what I, and probably almost anybody else in the world, would have done at that moment: he didn't think "Wow, that's pretty neat!" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).

Taking an idea seriously means:

  • Looking at how a new idea fits in with your model of reality and checking for contradictions or tensions that may indicate the need to update a belief, and then propagating that belief update through the entire web of beliefs in which it is embedded. When a belief or a set of beliefs changes, that can in turn have huge effects on your overarching web of interconnected beliefs. (The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.) Failing to propagate that change leads to trouble. Compartmentalization is dangerous. (A toy sketch of this propagation step follows this list.)

  • Noticing when an idea seems to be describing a part of the territory where you have no map. Drawing a rough sketch of the newfound territory and then seeing in what ways that changes how you understand the parts of the territory you've already mapped.

  • Not just examining an idea's surface features and then accepting or dismissing it. Instead looking for deep causes. Not internally playing a game of reference class tennis.

  • Explicitly reasoning through why you think the idea might be correct or incorrect, what implications it might have both ways, and leaving a line of retreat in both directions. Having something to protect should fuel your curiosity and prevent motivated stopping.

  • Noticing confusion.

  • Recognizing when a true or false belief about an idea might lead to drastic changes in expected utility.
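
To make the 'propagate the update' step concrete, here is a minimal sketch of my own, under the caricature that a web of beliefs is a directed graph: revising one node flags every belief downstream of it for re-examination, which is the opposite of compartmentalization. The belief names and the propagate helper below are hypothetical illustrations, not anyone's actual inference machinery, and a real web is of course far messier than a breadth-first traversal.

```python
from collections import deque

# Hypothetical web of beliefs: each belief maps to the beliefs that depend on it.
dependents = {
    "the universe is naturalistic": ["minds are physical processes", "there is no afterlife"],
    "minds are physical processes": ["AGI is possible"],
    "AGI is possible": ["AGI could be really dangerous"],
    "there is no afterlife": ["cryonics is worth evaluating"],
    "AGI could be really dangerous": [],
    "cryonics is worth evaluating": [],
}

def propagate(updated_belief, dependents):
    """Return every belief that needs re-examination after an update,
    visiting each downstream node exactly once (breadth-first)."""
    to_revisit = []
    seen = {updated_belief}
    queue = deque([updated_belief])
    while queue:
        node = queue.popleft()
        for child in dependents.get(node, []):
            if child not in seen:
                seen.add(child)
                to_revisit.append(child)
                queue.append(child)
    return to_revisit

print(propagate("the universe is naturalistic", dependents))
# ['minds are physical processes', 'there is no afterlife', 'AGI is possible',
#  'cryonics is worth evaluating', 'AGI could be really dangerous']
```

A compartmentalized thinker, in this caricature, stops after updating the first node; taking the idea seriously means actually walking the whole reachable set.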

There are many ideas that should be taken a lot more seriously, both by society and by Less Wrong specifically. Here are a few:

  • Existential risks and possible methods of preventing them.

  • Molecular nanotechnology.

  • The technological singularity (especially timelines and planning).

  • Cryonics.

  • World economic collapse.

Some potentially important ideas that I readily admit to not yet having taken seriously enough:

  • Molecular nanotechnology timelines.

  • Ways to protect against bioterrorism.

  • The effects of drugs of various kinds and methodologies for researching them.

  • Intelligence amplification.

And some ideas that I did not immediately take seriously when I should have:

  • Tegmark's multiverses and related cosmology, and the manifold implications thereof (and the related simulation argument).

  • The subjective for-Will-Newsome-personally irrationality of cryonics.1

  • EMP attacks.

  • Updateless-like decision theory and the implications thereof.

  • That philosophical and especially metaphysical intuitions are not strong evidence.

  • The idea of taking ideas seriously.

  • And various things that I probably should have taken seriously, and would have if I had known how to, but that I now forget because I failed to grasp their gravity at the time.

I also suspect that there are ideas that I should be taking seriously but do not yet know enough about; for example, maybe something to do with my diet. I could very well be poisoning myself and my cognition without knowing it because I haven't looked into the possible dangers of the various things I eat. Maybe corn syrup is bad for me? I dunno; but nobody's ever sat me down and told me I should look into it, so I haven't. That's the problem with ideas that really deserve to be taken seriously: it's very rare that someone will take the time to make you do the research and really think about it in a rational and precise manner. They won't call you out when you fail to do so. They won't hold you to a high standard. You must hold yourself to that standard, or you'll fail.

Why should you take ideas seriously? Well, if you have Something To Protect, then the answer is obvious. That's always been my inspiration for taking ideas seriously: I force myself to investigate any way to help that which I value flourish. This manifests on both the small and the large scale: if a friend is going to get a medical operation, I research the relevant literature and make sure that the operation works and that it's safe. And if I find out that the development of an unFriendly artificial intelligence might lead to the pointless destruction of everyone I love and everything I care about and any value that could be extracted from this vast universe, then I research the relevant literature there, too. And then I keep on researching. What if you don't have Something To Protect? If you simply have a desire to figure out the world (maybe not an explicit desire for instrumental rationality, but at least epistemic rationality), then taking ideas seriously is the only way to figure out what's actually going on. For someone passionate about answering life's fundamental questions to miss out on Tegmark's cosmology is truly tragic. That person is losing a vista of amazing perspectives that may or may not end up allowing them to find what they seek, but that will at the very least change for the better the way they think about the world.

Failure to take ideas seriously can lead to all kinds of bad outcomes. On the societal level, it leads to a world where almost no attention is paid to catastrophic risks like nuclear EMP attacks. It leads to scientists talking about spirituality with a tone of reverence. It leads to statisticians playing the lottery. It leads to an academia where an AGI researcher who completely understands that the universe is naturalistic and beyond the reach of God fails to realize that this means an AGI could be really, really dangerous. Even people who make entire careers out of an idea somehow fail to take it seriously, to see its implications and how it should move in perfect alignment with every single one of their actions and beliefs. If we could move in such perfect alignment, we would be gods. To be a god is to see the interconnectedness of all things and shape reality accordingly. We're not even close. (I hear some folks are working on it.) But if we are to become stronger, that is the ideal we must approximate.

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health. There are some cases where it is best to recognize this and move on to other ideas. Brains are fragile, and some ideas are viruses that cause chaotic mutations in your web of beliefs. Curiosity and diligence are not always your friends, and even those with exceptionally high SAN points can't read too much eldritch lore before having to retreat. Not only can ignorance be bliss, it can also be the instrumentally rational state of mind.2

What are ideas you think Less Wrong hasn't taken seriously? Which haven't you taken seriously, but would like to once you find the time or gain the prerequisite knowledge? Is it best to have many loosely connected webs of belief, or one tightly integrated one? Do you have examples of a fully executed belief update leading to massive or chaotic changes in a web of belief? Alzheimer's disease may be considered an 'update' where parts of the web of belief are simply erased, and I've already listed deconversion as another. What kinds of advantages could compartmentalization give a rationalist?


1 I should write a post about reasons for people under 30 not to sign up for cryonics. However, doing so would require writing a post about Singularity timelines, and I really, really don't want to write that one. It seems that a lot of LWers have AGI timelines that I would consider… erm, ridiculous. I've asked Peter de Blanc to bear the burden of proof, and I'm going to bug him about it every day until he writes up the article.

2 If you snarl at this idea, try playing with this Litany, and then playing with how you play with this Litany:

If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.