The Lens That Sees Its Flaws

Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied.

Here is the secret of deliberate rationality—this whole process is not magic, and you can understand it. You can understand how you see your shoelaces. You can think about which sort of thinking processes will create beliefs which mirror reality, and which thinking processes will not.

Mice can see, but they can’t understand seeing. You can understand seeing, and because of that, you can do things that mice cannot do. Take a moment to marvel at this, for it is indeed marvelous.

Mice see, but they don’t know they have visual cortexes, so they can’t correct for optical illusions. A mouse lives in a mental world that includes cats, holes, cheese, and mousetraps—but not mouse brains. Its camera does not take pictures of its own lens. But we, as humans, can look at a seemingly bizarre image and realize that part of what we’re seeing is the lens itself. You don’t always have to believe your own eyes, but you have to realize that you have eyes—you must have distinct mental buckets for the map and the territory, for the senses and reality. Lest you think this a trivial ability, remember how rare it is in the animal kingdom.

The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world. It is the sort of thing mice would never invent. Pondering this business of “performing replicable experiments to falsify theories,” we can see why it works. Science is not a separate magisterium, far away from real life and the understanding of ordinary mortals. Science is not something that only applies to the inside of laboratories. Science, itself, is an understandable process-in-the-world that correlates brains with reality.

Science makes sense, when you think about it. But mice can’t think about thinking, which is why they don’t have Science. One should not overlook the wonder of this—or the potential power it bestows on us as individuals, not just scientific societies.

Admittedly, understanding the engine of thought may be a little more complicated than understanding a steam engine—but it is not a fundamentally different task.

Once upon a time, I went to EFNet’s #philosophy chatroom to ask, “Do you believe a nuclear war will occur in the next 20 years? If no, why not?” One person who answered the question said he didn’t expect a nuclear war for 100 years, because “All of the players involved in decisions regarding nuclear war are not interested right now.” “But why extend that out for 100 years?” I asked. “Pure hope,” was his reply.

Reflecting on this whole thought process, we can see why the thought of nuclear war makes the person unhappy, and we can see how his brain therefore rejects the belief. But if you imagine a billion worlds—Everett branches, or Tegmark duplicates[1]—this thought process will not systematically correlate optimists to branches in which no nuclear war occurs.[2]
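
To make the correlation point concrete, here is a minimal simulation sketch in Python; every number in it is a hypothetical chosen purely for illustration. A belief held out of pure hope is right only as often as the base rate of good outcomes allows, while a belief that tracks even a noisy signal from the world does systematically better.

```python
import random

# A minimal sketch with hypothetical numbers: simulate many "worlds"
# in which nuclear war independently occurs with some base rate, and
# compare two belief-forming processes across those worlds.
random.seed(0)
N = 1_000_000
BASE_RATE = 0.3  # assumed probability of war in any given world

worlds = [random.random() < BASE_RATE for _ in range(N)]  # True = war

# Hope-driven process: believes "no war" in every world, regardless
# of what that world is actually like.
hope_correct = sum(1 for war in worlds if not war)

# Evidence-driven process: observes a noisy signal of the world state
# (assumed 80% accurate) and believes whatever the signal says.
def noisy_signal(war: bool) -> bool:
    return war if random.random() < 0.8 else not war

evidence_correct = sum(1 for war in worlds if noisy_signal(war) == war)

print(f"hope-driven accuracy:     {hope_correct / N:.3f}")      # ~0.700, the base rate
print(f"evidence-driven accuracy: {evidence_correct / N:.3f}")  # ~0.800
```

Knowing the hope-driven believer’s belief tells you nothing about which branch he is in; his accuracy is fixed by the base rate alone. That is what it looks like, in miniature, for a thought process to fail to correlate beliefs with branches.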

To ask which beliefs make you happy is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes.

If you can see this—if you can see that hope is shifting your first-order thoughts by too large a degree—if you can understand your mind as a mapping engine that has flaws—then you can apply a reflective correction. The brain is a flawed lens through which to see reality. This is true of both mouse brains and human brains. But a human brain is a flawed lens that can understand its own flaws—its systematic errors, its biases—and apply second-order corrections to them. This, in practice, makes the lens far more powerful. Not perfect, but far more powerful.

[1] Max Tegmark, “Parallel Universes,” in Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, ed. John D. Barrow, Paul C. W. Davies, and Charles L. Harper Jr. (New York: Cambridge University Press, 2004), 459–491, http://arxiv.org/abs/astro-ph/0302131.

[2] Some clever fellow is bound to say, “Ah, but since I have hope, I’ll work a little harder at my job, pump up the global economy, and thus help to prevent countries from sliding into the angry and hopeless state where nuclear war is a possibility. So the two events are related after all.” At this point, we have to drag in Bayes’s Theorem and measure the relationship quantitatively. Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only slightly increases your chance of being right, will still mess up your mapping.
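
As a rough illustration of the quantitative point, we can write out the odds form of Bayes’s Theorem; the specific numbers below are assumptions chosen only to show the scale of the effect. Suppose optimists are only very slightly more common in worlds that avoid war, so that the evidence E = “I am an optimist” bears on the hypothesis H = “no nuclear war” with a likelihood ratio of 0.51 to 0.50:

```latex
% Odds form of Bayes's Theorem, with assumed illustrative numbers:
% a 50/50 prior on H, and P(E|H) = 0.51 versus P(E|not-H) = 0.50.
\frac{P(H \mid E)}{P(\neg H \mid E)}
  = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
  = \frac{0.5}{0.5} \cdot \frac{0.51}{0.50}
  = 1.02
% Posterior odds of 1.02 give P(H | E) = 1.02 / 2.02, roughly 0.505.
```

A likelihood ratio this weak licenses a shift of about half a percentage point, not twenty. If hope moves your belief by twenty points anyway, the other nineteen and a half came from the paintbrush, not the territory.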