Don’t Believe Wrong Things

This is cross-posted from Putanumonit.com; you can jump into the discussion in either place.


LessWrong has a reputation for being a place where dry and earnest people write dry and earnest essays with titles like “Don’t Believe Wrong Things”. A casual visitor wouldn’t expect it to host lively discussions of prophets, of wizards, and of achieving enlightenment. And yet, each of the above links does lead to LessWrong, and each post (including mine) has more than a hundred comments.

The discussion often turns to a debate that rages eternal in the rationalist community: correctness vs. usefulness. Rationality is about having true beliefs, we are told, but rationalists should also win. Winning, aka instrumental rationality, sure sounds a lot more fun than just believing true things (epistemic rationality). People are tempted to consider it the primary goal of rationality, with the pursuit of truth being secondary.

Mentions of the “useful but incorrect”, which is how I see Jordan Peterson, invite comments like this:

A correct epistemological process is likely to assign very low likelihood to the proposition of Christianity being true at some point. Even if Christianity is true, most Christians don’t have good epistemics behind their Christianity; so if there exists an epistemically justifiable argument for ‘being a Christian’, our hypothetical cradle-Christian rationalist is likely to reach the necessary epistemic skill level to see through the Christian apologetics he’s inherited before he discovers it.

At which point he starts sleeping in on Sundays; loses the social capital he’s accumulated through church; has a much harder time fitting in with Christian social groups; and cascades updates in ways that are, given the social realities of the United States and similar countries, likely to draw him toward other movements and behavior patterns, some of which are even more harmful than most denominations of Christianity, and away from the anthropological accumulations that correlate with Christianity, some of which may be harmful but some of which may be protecting against harms that aren’t obvious even to those with good epistemics. Oops! Is our rationalist winning?
[…]
epistemic rationality is important because it’s important for instrumental rationality. But the thing we’re interested in is instrumental rationality, not epistemic rationality. If the instrumental benefits of being a Christian outweigh the instrumental harms of being a Christian, it’s instrumentally rational to be a Christian. If Christianity is false and it’s instrumentally rational to be a Christian, epistemic rationality conflicts with instrumental rationality.

Well, it’s time for a dry and earnest essay (probably overdue after last week’s grapefruits) on the question of instrumental vs. epistemic rationality. I am not breaking any ground that wasn’t previously covered in the Sequences etc., but I believe that this exercise is justified in the spirit of non-expert explanation.

I will attempt to:

  1. Dissolve a lot of the dichotomy between “useful” and “correct”, via some examples that use “wrong” wrong.

  2. Of the dichotomy that remains, position myself firmly on the correct side of the debate.

  3. Suggest that convincing yourself of something wrong is, in fact, possible and should be vigilantly guarded against.

  4. Say some more in praise of fake frameworks, and what they mean if they don’t mean “believing in false things”.

Wrong and Less Wrong

What does “truth” mean, for example in the definition of epistemic rationality as “the pursuit of true beliefs about the world”? I think that a lot of the apparent conflict between the “useful” and “true” stems from confusion about the latter word that isn’t merely semantic. As exemplars of this confusion, I will use Brian Lui’s posts: wrong models are good, correct models are bad, and useful models are better than correct models.

I have chosen Brian as a foil because:

  1. We actually disagree, but both do so in good faith.

  2. I asked him if I could, and he said OK.

Here are some examples that Brian uses:

Correct Models | Useful Models
Schrödinger’s model | Bohr’s atomic model
Calorie-in-calorie-out | Focus on satiety
Big 5 personality | MBTI
Spherical Earth | Flat Earth

You may be familiar with Asimov’s quote:

“When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

People often overlook the broader context of the quote. Asimov makes the point that Flat Earth is actually a very good model. Other models could posit an Earth with infinitely tall mountains or bottomless trenches, or perhaps an Earth tilted in such a way that walking north-west would always be uphill. A flat Earth model, built on empiricism and logic, is quite an achievement:

Perhaps it was the appearance of the plain that persuaded the clever Sumerians to accept the generalization that the earth was flat; that if you somehow evened out all the elevations and depressions, you would be left with flatness. Contributing to the notion may have been the fact that stretches of water (ponds and lakes) looked pretty flat on quiet days.

A model is correct or not in the context of a specific question asked of it, such as “Will I arrive back home from the east if I keep sailing west?” The flat Earth model was perfectly fine until that question was asked, and the first transoceanic voyages took place more than 1,000 years after Eratosthenes calculated the spherical Earth’s radius with precision.

But it’s not just the “wrong” models that are true; the “correct” models are also wrong, as George Box famously noticed. The Earth’s shape isn’t a sphere. It’s not even a geoid; it changes moment by moment with the tides, plate tectonics, and ants building anthills. Brian’s division of models into the correct and the incorrect starts to seem somewhat arbitrary, so what is it based on?

Brian considers the Big 5 personality model to be more “correct” and “scientific” because it was created using factor analysis, while Myers-Briggs is based on Jung’s conceptual theory. But the trappings of science don’t make a theory true, particularly when the science in question has a fraught relationship with the truth. How “scientific” a process was used to generate a model can correlate with its truthfulness, but as a definition it seems to miss the mark entirely.

Rationalists usually measure the truth of a model by the rent it pays when it collides with reality. Neither MBTI nor Big 5 does a whole lot of useful prediction, and they’re not even as fun as the MTG color system. On the other hand, Bohr’s atomic model works for most questions of basic chemistry and even the photoelectric effect.

A model is wrong not because it is not precisely quantified (like satiety), or because it wasn’t published in a science journal (like MBTI), or because it has been superseded by a more reductionist model (like Bohr’s atom). It is wrong when it predicts things that don’t happen or prohibits things that do.

When a model’s predictions and prohibitions line up with observable reality, the model is true. When those predictions are easy to make and check, it is useful. Calorie-in-calorie-out isn’t very useful on the question of successful dieting because it is so difficult for people to just change their caloric balance as an immediate action. This difficulty doesn’t make the model any more or less correct; it just means that it’s hard to establish its correctness by seeing whether people who try to count calories lose weight or not. In this view truth and usefulness are almost orthogonal: truth is a precondition for usefulness, while some models are so wrong that they are worse than useless.
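As an aside, the model’s predictions really are easy to state even when they are hard to act on. Here is a minimal sketch of the energy-balance arithmetic; the 7,700 kcal-per-kilogram figure is a rough approximation, and the function name is mine, purely for illustration:

```python
# A rough sketch of the calorie-in-calorie-out prediction.
# The ~7,700 kcal per kilogram of fat figure is a common approximation,
# used here only to show that the model makes a checkable forecast.
KCAL_PER_KG_FAT = 7700  # assumption, not a precise physiological constant

def predicted_weight_change_kg(daily_deficit_kcal: float, days: int) -> float:
    """Weight change the naive energy-balance model predicts (negative = loss)."""
    return -(daily_deficit_kcal * days) / KCAL_PER_KG_FAT

# The model predicts roughly 3.5 kg lost over 90 days at a 300 kcal/day deficit.
# Whether anyone can sustain that deficit is a separate, practical question,
# which is exactly the truth-vs-usefulness distinction above.
print(round(predicted_weight_change_kg(300, 90), 1))  # -3.5
```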

Jesus and Gandhi

Usefulness, in the sense of beliefs paying rent, is a narrower concept than winning, e.g., making money to pay your actual rent. The comment about the lapsed Christian I quoted talks about instrumental rationality as the pursuit of actually winning in life. So, is the rejection of Christ epistemically rational but instrumentally irrational?

First of all, I think that the main mistake the hypothetical apostate is making is a bucket error. In his mind, there is a single variable labeled “Christianity” which contains a boolean value: True or False. This single variable serves as an answer to many distinct questions, such as:

  1. Did Jesus die for my sins?

  2. Should I go to church on Sunday?

  3. Should I be nice to my Christian friends?

There is no reason why all three questions must have the same answer, as demonstrated by my closet-atheist friend who lives in an Orthodox Jewish community. The rent in the Jewish part of Brooklyn is pretty cheap (winning!) and doesn’t depend on one’s beliefs about revelation. Living a double life is not ideal, and it is somewhat harder to fit in with a religious community if you’re a non-believer. But carelessly propagating new beliefs before sorting out the buckets in one’s head is much more dangerous than zoning out during prayer times. Keeping behaviors that correlate with a false belief is very different from installing new beliefs to change one’s behavior.
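To make the bucket error concrete, here is a minimal sketch in code; the variable and function names are mine, purely for illustration. The failure mode is letting one boolean drive every downstream decision instead of answering each question on its own terms:

```python
# The bucket error: one boolean answers every question at once.
christianity_is_true = False  # the single "Christianity" bucket

def bucketed_decisions(bucket: bool) -> dict:
    # Every answer is forced to flip together with the one bucket.
    return {
        "jesus_died_for_my_sins": bucket,
        "go_to_church_on_sunday": bucket,
        "be_nice_to_christian_friends": bucket,
    }

def separate_buckets() -> dict:
    # Each question gets its own variable and can be settled on its own evidence.
    return {
        "jesus_died_for_my_sins": False,       # a metaphysical claim
        "go_to_church_on_sunday": True,        # a practical question about community
        "be_nice_to_christian_friends": True,  # a question about how to treat people
    }

print(bucketed_decisions(christianity_is_true))  # everything collapses to False
print(separate_buckets())                        # the answers come apart
```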

Information hazards are also a thing. There are many real things that we wish other people wouldn’t know, and some things that we wouldn’t want to learn ourselves. But avoiding true but dangerous knowledge is also very different from hunting false beliefs.

With that said, what if hunting and installing false beliefs is actually justified? A friend of mine who’s a big fan of Jordan Peterson is joking-not-joking about converting to Christianity. If Christianity provides one with friends, meaning, and protection from harmful ideologies, isn’t it instrumentally rational to convert?

There’s a word for this sort of bargain: Faustian. One should always imagine this spoken by someone with reddish skin, twisty horns, and an expensive suit. I offer you all this, and all I want in return is a tiny bit of epistemic rationality. What’s it even worth to you?

Epistemic rationality is worth a lot.

It takes a lot of epistemic rationality to tease apart causation from the mere correlation of religion with its benefits. Perhaps a Christian’s community likes him because consistent beliefs make a person predictable; this benefit wouldn’t extend to a fresh convert. As for meaning and protection from adverse memes, are those provided by Jesus or by the community itself? Or by some confounder like age or geography?

A person discerning enough on matters of friendship to judge whether it is the cause or the effect of Christian belief probably understands friendship well enough to make friends with or without converting. I help run a weekly meetup of rationalists in New York. We think a lot about building an active community, and we implement this in practice. We may not provide the full spiritual package of a church, but we also don’t demand a steep price from our members: neither in money, nor in effort, nor in dogma.

Perhaps converting is the instrumentally optimal thing to do for a young rationalist, but it would require heroic epistemic rationality to know that it is so. And once you have converted, that epistemic rationality is gone forever, along with the ability to reason well about such trade-offs in the future. If you discover a new religion tomorrow that offers ten times the benefits of Christianity, it would be too late: your new belief in the truth of Christianity will prevent you from even considering the option of converting to the new religion.

This argument is colloquially known as The Legend of Murder-Gandhi. Should Gandhi, who abhors violence, take a pill that makes him only 99% as reluctant to commit murder, in exchange for a million dollars? No, because 99%-pacifist Gandhi will not hesitate to take another pill and go to 98%, then down to 97%, then 90%, and so on until he’s rampaging through the streets of Delhi, killing everything in sight.

An exception could be made if Gandhi had a way to commit himself to stopping at 95% pacifism; that’s still pacifist enough that he doesn’t really need to worry about acting violently, and it leaves him $5 million richer.
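Here is a toy simulation of the slippery slope as I understand it; the numbers and the 95% fence come from the example above, and the rest is my own sketch, not anything from the original Schelling fences post:

```python
from typing import Optional, Tuple

# Toy model of the Murder-Gandhi slippery slope.
# Each pill trades 1% of pacifism for $1 million, and Gandhi keeps accepting
# the deal for as long as his current level of pacifism allows it.

def take_pills(schelling_fence: Optional[float] = None) -> Tuple[float, int]:
    """Return (final pacifism %, millions earned), with an optional precommitment."""
    pacifism, millions = 100.0, 0
    while pacifism > 0:
        # A precommitted Gandhi refuses any pill that would cross the fence.
        if schelling_fence is not None and pacifism - 1 < schelling_fence:
            break
        # Without a fence, each individual pill still looks like a great deal
        # to a mostly-pacifist Gandhi, so he keeps taking them.
        pacifism -= 1
        millions += 1
    return pacifism, millions

print(take_pills())                    # (0.0, 100): rampaging through Delhi
print(take_pills(schelling_fence=95))  # (95.0, 5): still pacifist, $5M richer
```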

But epistemic rationality is a higher-level skill than mere pacifism. It’s the skill that’s necessary not only to assess a single trade-off, but also to understand the dangers of slippery slopes, the benefits of pre-commitments, and the need for Functional Decision Theory in a world full of Newcomblike problems. A Gandhi who’s perfectly pacifist but doesn’t understand Schelling fences will take the first pill, and all his pacifism will be for naught.

Do you think you have enough epistemic rationality to determine when it’s really worth sacrificing epistemic rationality for something else? Better to keep increasing your epistemic rationality, just to be sure.

Flat Moon Society

Is this a moot point, though? It’s not like you can make yourself go to sleep an atheist and wake up a devout Christian tomorrow. Eliezer wrote a whole sequence on the inability to self-deceive:

We do not have such direct control over our beliefs. You cannot make yourself believe the sky is green by an act of will. You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference. (You’re welcome!) You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.
[…]
You can’t know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.
The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is not second-order rationality. It is willful stupidity.

He gives an example of a very peculiar Orthodox Jew:

When this woman was in high school, she thought she was an atheist. But she decided, at that time, that she should act as if she believed in God. And then—she told me earnestly—over time, she came to really believe in God.
So far as I can tell, she is completely wrong about that. Always throughout our conversation, she said, over and over, “I believe in God”, never once, “There is a God.” When I asked her why she was religious, she never once talked about the consequences of God existing, only about the consequences of believing in God. Never, “God will help me”, always, “my belief in God helps me”. When I put to her, “Someone who just wanted the truth and looked at our universe would not even invent God as a hypothesis,” she agreed outright.

She hasn’t actually deceived herself into believing that God exists or that the Jewish religion is true. Not even close, so far as I can tell.

On the other hand, I think she really does believe she has deceived herself.

But eventually, he admits that believing you won’t self-deceive is also somewhat of a self-fulfilling prophecy:

It may be wise to go around deliberately repeating “I can’t get away with double-thinking! Deep down, I’ll know it’s not true! If I know my map has no reason to be correlated with the territory, that means I don’t believe it!”

Because that way—if you’re ever tempted to try—the thoughts “But I know this isn’t really true!” and “I can’t fool myself!” will always rise readily to mind; and that way, you will indeed be less likely to fool yourself successfully. You’re more likely to get, on a gut level, that telling yourself X doesn’t make X true: and therefore, really truly not-X.

To me the sequence’s message is “don’t do it!” rather than “it’s impossible!”. If self-deception were impossible, there would be no need for injunctions against it.

Self-deception definitely isn’t easy. A good friend of mine told me about two guys he knows who are aspiring flat-Earthers. Out of the pure joy of contrarianism, the two have spent countless hours watching flat-Earth apologia on YouTube. So far their yearning for globeless epiphany hasn’t been answered, although they aren’t giving up.

A coworker of mine feels that every person should believe in at least one crazy conspiracy theory, and so he says that he convinced himself that the moon landing was faked. It’s hard to tell if he fully believes it, but he probably believes it somewhat. His actual beliefs about NASA have changed, not just his beliefs-in-self-deception. Perhaps earlier in life, he would have bet that the moon landing was staged in a movie studio only at million-to-one odds, and now he’ll take that bet at 100:1.
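To put rough numbers on that shift (my own arithmetic, treating the quoted betting odds as his actual credences):

```python
import math

# Betting odds read as credences: million-to-one then, 100-to-1 now.
p_then = 1 / 1_000_001   # roughly one in a million
p_now = 1 / 101          # roughly one percent

odds_then = p_then / (1 - p_then)
odds_now = p_now / (1 - p_now)

# The odds ratio between the two states of belief, and the same shift in bits.
odds_ratio = odds_now / odds_then   # about 10,000x
bits = math.log2(odds_ratio)        # about 13.3 bits toward the conspiracy

print(round(odds_ratio), round(bits, 1))
```

In odds terms that is a ten-thousand-fold update toward the conspiracy, about 13 bits of evidence, even though he still mostly disbelieves it.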

He is certainly less likely to discount the other opinions of moon-landing skeptics, which leaves him a lot more vulnerable to being convinced of bullshit in the future. And the mere belief-in-belief is still a wrong belief that was created in his mind ex nihilo. My colleague clearly sacrificed some amount of epistemic rationality, although it’s unclear what he got in return.

Self-deception works like deception. False beliefs sneak into your brain the same way a grapefruit does.

  1. First, we hear something stated as fact: the moon landing was staged. Our brain’s immediate reaction on a neurological level to a new piece of information is to believe it. Only when propagating the information shows it to be in conflict with prior beliefs is it discarded. But nothing can ever be discarded entirely by our brains, and a small trace remains.

  2. We come across the same information a few more times. Now, the brain recognizes it as familiar, which means that it anchors itself deeper into the brain even if it is disbelieved every time. The traces accumulate. Was the footage of the moon landing really all it seemed?

  3. Perhaps we associate a positive feeling with the belief. Wouldn’t it be cool if the Apollo missions never happened? This means that I can still be the first human on the moon!

  4. Even if we still don’t believe the original lie when questioning it directly, it still occupies some territory in our head. Adjacent beliefs get reinforced through confirmation bias, which in turn reinforces the original lie. If the “landing” was really shot on the moon, why was the flag rippling in the wind? Wait, is the flag actually rippling? We don’t remember, it’s not like we watch moon landing footage every day. But now we believe that the flag was rippling, which reinforces the belief that the moon landing was fake.

  5. We forget where we initially learned the information from. Even if the original claim about the moon fakery was presented as untrue and immediately debunked, we will just remember that we heard somewhere that it was all an elaborate production to fool the Russians. We recall that we used to be really skeptical of the claim once, but it sure feels like a lot of evidence has been pointing that way recently…

It is easiest to break this chain at step 1: avoid putting trash into your brain. As an example, I will never read the Trump exposé Fire and Fury under any circumstances, and I implore my friends to do the same. Practically everyone agrees that the book has ten rumors and made-up stories for every single verifiable fact, but if you read the book, you don’t know which is which. If you’re the kind of person who’s already inclined to believe anything and everything about Donald Trump, reading the book will inevitably make you stupider and less informed about the president. And this “kind of person” apparently includes most of the country, because no parody of Fire and Fury has been too outlandish to be believed.

Take the Glasses Off

So, what are “fake frameworks” and what do they have to do with all of this?

I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.
[…]
Assume the intuition is wrong. It’s fake. And then use it anyway.

It almost sounds as if Val is saying that we should believe in wrong things, but I don’t think that’s the case. Here’s the case.

First of all, you should use a safety mechanism when dealing with fake frameworks: sandboxing. This means holding the belief in a separate place where it doesn’t propagate.

This is why I talk about wearing a “Peterson mask”, or having Peterson as a voice on your shoulder. The goal is to generate answers to questions like “What would Peterson tell me to do here? And how would Scott Alexander respond?” rather than literally replacing your own beliefs with someone else’s. Answering those questions does require thinking as Peterson for a while, but you can build scaffolding that prevents that mode of thinking from taking over.

But sandboxing is secondary to the main point of fake frameworks: they’re not about believing new things, they’re about un-believing things.

A lot of fake frameworks deal with the behavior of large numbers of people: coordination problems are an ancient hungry demon, the social web forces people into playing roles, Facebook is out to get you. In what sense is Facebook out to get you? Facebook is thousands of employees and millions of shareholders pursuing their own interests, not a unified agent with desires.

But neither is a person.

People’s minds are made up of a multitude of independent processes, conscious and unconscious, each influencing our interactions with the world. Our single-minded pursuit of genetic fitness has shattered into a thousand shards of desire. Insofar as we have strategic goals such as being out to get someone, we are constantly distracted from them and constantly changing them.

The insight of fake frameworks is that every framework you use is fake, especially when talking about complicated things like people and societies. “Society” and “person” themselves aren’t ontologically basic entities, just useful abstractions. Useful, but not 100% true.

And yet, you have to interact with people and societies every day. You can’t do it without some framework of thinking about people; a cocktail party isn’t navigable on the level of quarks or molecules or cells. You have to see human interaction through one pair of glasses or another. The glasses you look through impose some meaning on the raw data of moving shapes and mouth sounds, but that meaning is “fake”: it’s part of the map, not the territory.

Once you realize that you’re wearing glasses, it’s hard to forget that fact. You can now safely take the glasses off and replace them with another pair, without confusing what you see through the lenses with what exists on a fundamental level. The process is gradual, peeling away layer after layer of immutable facts that turned out to be interpretations. Every time a layer is peeled away, you have more freedom to play with new frameworks of interpretation to replace it.

If you can stand one more visual metaphor, the skill of removing the glasses is also called Looking. This art is hard and long and I’m only a novice in it, but I have a general sense of the direction of progress. There seems to be a generalizable skill of Looking and playing with frameworks, as well as domain-specific understanding that is required for Looking in different contexts. Deep curiosity is needed, and also relinquishment. It often takes an oblique approach rather than brute force. For example, people report the illusion of a coherent “self” being dispelled by such varied methods as meditation, falling in love, taking LSD, and studying philosophy.

Finally, while I can’t claim the benefits that others can, I think that Looking offers real protection against being infected with wrong beliefs. Looking is internalizing that some of your beliefs about the world are actually interpretations you impose on it. False interpretations are much easier to critically examine and detach from than false beliefs. You end up believing fewer wrong things about the world simply because you believe fewer things about the world.

And if Looking seems beyond reach, believing fewer wrong things is always a good place to start.
