Skill: The Map is Not the Territory

Followup to: The Useful Idea of Truth (minor post)

So far as I know, the first piece of rationalist fiction—one of only two explicitly rationalist fictions I know of that didn’t descend from HPMOR, the other being “David’s Sling” by Marc Stiegler—is the Null-A series by A. E. van Vogt. In van Vogt’s story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can’t pick up and use even if they’re supposedly mental—e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn’s adventure is embodied in his slogan:

“The map is not the territory.”

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century. I read van Vogt’s story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.

But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:

Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory—to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly? How exactly does it help, on what sort of problem?

...

...

...

Skill 1: The conceivability of being wrong.

In the story, Gilbert Gosseyn is most liable to be reminded of this proverb when some belief is uncertain; “Your belief in that does not make it so.” It might sound basic, but this is where some of the earliest rationalist training starts—making the jump from living in a world where the sky just is blue, the grass just is green, and people from the Other Political Party just are possessed by demonic spirits of pure evil, to a world where it’s possible that reality is going to be different from these beliefs and come back and surprise you. You might assign low probability to that in the grass-is-green case, but in a world where there’s a territory separate from the map it is at least conceivable that reality turns out to disagree with you. There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble, first in a world like X, then in a world like not-X, in cases where they are tempted to entirely neglect the possibility that they might be wrong. “He hates me!” and other beliefs about other people’s motives seem to be a domain in which “I believe that he hates me” or “I hypothesize that he hates me” might work a lot better.

Probabilistic reasoning is also a remedy for similar reasons: Implicit in a 75% probability of X is a 25% probability of not-X, so you’re hopefully automatically considering more than one world. Assigning a probability also inherently reminds you that you’re occupying an epistemic state, since only beliefs can be probabilistic, while reality itself is either one way or another.
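To make that arithmetic concrete, here is a minimal sketch in Python (the function name and the particular numbers are hypothetical, not from the post): assigning P(X) = 0.75 automatically commits you to P(not-X) = 0.25, and evidence updates only the belief, never the underlying world.

```python
# Minimal sketch: a probability assignment is an epistemic state over both
# possible worlds, and a Bayesian update changes the map, not the territory.

def update(prior_x, likelihood_if_x, likelihood_if_not_x):
    """Posterior probability of X after seeing one piece of evidence."""
    joint_x = prior_x * likelihood_if_x
    joint_not_x = (1 - prior_x) * likelihood_if_not_x  # the implicit not-X world
    return joint_x / (joint_x + joint_not_x)

belief_in_x = 0.75                      # the map: 75% X, hence 25% not-X
print("P(not-X) =", 1 - belief_in_x)    # 0.25, considered automatically

# Evidence that is four times likelier under not-X than under X:
belief_in_x = update(belief_in_x, 0.2, 0.8)
print("P(X) after evidence =", round(belief_in_x, 3))   # ~0.429
```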

Skill 2: Perspective-taking on beliefs.

What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you. They aren’t disagreeing with you because they’re obstinate, they’re disagreeing because the world feels different to them—even if the two of you are in fact embedded in the same reality.

This is one of the secret writing rules behind Harry Potter and the Methods of Rationality. When I write a character, e.g. Draco Malfoy, I don’t just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most other books are told from only one character’s viewpoint, but when they are told from more than one, it’s strange how often the other characters seem to be living inside the protagonist’s universe and to think mostly about things that are important to the main protagonist. In HPMOR, when you enter Draco Malfoy’s viewpoint, you are plunged into Draco Malfoy’s subjective universe, in which Death Eaters have reasons for everything they do and Dumbledore is an exogenous reasonless evil. Since I’m not trying to show off postmodernism, everyone is still recognizably living in the same underlying reality, and the justifications of the Death Eaters only sound reasonable to Draco, rather than having been optimized to persuade the reader. It’s not like the characters literally have their own universes, nor is morality handed out in equal portions to all parties regardless of what they do. But different elements of reality have different meanings and different importances to different characters.

Joshua Greene has observed—I think this is in his Terrible, Horrible, No Good, Very Bad paper—that political discourse rarely gets beyond the point of lecturing naughty children who are just refusing to acknowledge the evident truth. As a special case, one may also appreciate internally that being wrong feels just like being right, unless you can actually perform some sort of experimental check.

Skill 3: You are less bamboozleable by anti-epistemology or motivated neutrality which explicitly claims that there’s no truth.

This is a negative skill—avoiding one more wrong way to do it—and mostly about quoted arguments rather than positive reasoning you’d want to conduct yourself. Hence the sort of thing we want to put less emphasis on in training. Nonetheless, it’s easier not to fall for somebody’s line about the absence of objective truth, if you’ve previously spent a bit of time visualizing Sally and Anne with different beliefs, and separately, a marble for those beliefs to be compared-to. Sally and Anne have different beliefs, but there’s only one way-things-are, the actual state of the marble, to which the beliefs can be compared; so no, they don’t have ‘different truths’. A real belief (as opposed to a belief-in-belief) will feel true, yes, so the two have different feelings-of-truth, but the feeling-of-truth is not the territory.
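If it helps to see that structure laid out mechanically, here is a toy sketch in Python (the names and the particular world-state are hypothetical illustrations): two belief-maps, one territory, and “true” defined as a comparison against that single territory.

```python
# Toy sketch: two agents hold different maps of the same territory.
# Truth is a relation between a belief and the one actual world-state,
# not something each agent gets a private copy of.

territory = {"marble_location": "box"}          # the single way-things-are

sally_belief = {"marble_location": "basket"}    # she left before the marble was moved
anne_belief = {"marble_location": "box"}

def is_true(belief, world):
    """A belief is true exactly when it matches the actual state of the world."""
    return all(world.get(key) == value for key, value in belief.items())

print("Sally's belief is true:", is_true(sally_belief, territory))   # False
print("Anne's belief is true:", is_true(anne_belief, territory))     # True
# Both beliefs feel true from the inside; both are graded against the same marble.
```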

To rehearse this, I suppose, you’d try to notice this kind of anti-epistemology when you ran across it, and maybe respond internally by actually visualizing two figures with thought bubbles and their single environment. Though I don’t think most people who understood the core insight would require any further persuasion or rehearsal to avoid contamination by the fallacy.

Skill 4: World-first reasoning about decisions, a.k.a. the Tarski Method, a.k.a. the Litany of Tarski.

Suppose you’re considering whether to wash your white athletic socks with a dark load of laundry, and you’re worried the colors might bleed into the socks, but on the other hand you really don’t want to have to do another load just for the white socks. You might find your brain selectively rationalizing reasons why it’s not all that likely for the colors to bleed—there are no really new dark clothes in there, say—trying to persuade itself that the socks won’t be ruined. At which point it may help to say:

“If my socks will stain, I want to believe my socks will stain;
If my socks won’t stain, I don’t want to believe my socks will stain;
Let me not become attached to beliefs I may not want.”

To stop your brain trying to persuade itself, visualize that you are either already in the world where your socks will end up discolored, or already in the world where your socks will be fine, and in either case it is better for you to believe you’re in the world you’re actually in. Related mantras include “That which can be destroyed by the truth should be” and “Reality is that which, when we stop believing in it, doesn’t go away”. Appreciating that belief is not reality can help us to appreciate the primacy of reality, and either stop arguing with it and accept it, or actually become curious about it.

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we’d-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow. For example, let’s say that you’ve been driving for a while, haven’t reached your hotel, and are starting to wonder if you took a wrong turn… in which case you’d have to go back and drive another 40 miles in the opposite direction, which is an unpleasant thing to think about, so your brain tries to persuade itself that it’s not lost. Anna and I use the form of the skill where we visualize the world where we are lost and keep driving.

Note that in principle, this is only one quadrant of a 2 x 2 matrix:

If you believe you’re heading in the right direction:
- and in reality you’re heading in the right direction: No need to change anything—just keep doing what you’re doing, and you’ll get to the conference hotel.
- but in reality you’re totally lost: Just keep doing what you’re doing, and you’ll eventually drive your rental car directly into the sea.

If you believe you’re lost:
- but in reality you’re heading in the right direction: Alas! You spend 5 whole minutes of your life pulling over and asking for directions you didn’t need.
- and in reality you’re totally lost: After spending 5 minutes getting directions, you’ve got to turn around and drive 40 minutes the other way.

Michael “Valentine” Smith says that he practiced this skill by actually visualizing all four quadrants in turn, and that with a bit of practice he could do it very quickly, and that he thinks visualizing all four quadrants helped.
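As one concrete way to run through that rehearsal, here is a minimal sketch in Python (the strings are hypothetical paraphrases of the quadrants above) that enumerates all four belief-by-reality combinations and their outcomes:

```python
# Sketch: walk the four quadrants of the map x territory matrix above.
from itertools import product

outcomes = {
    ("right direction", "believe: right direction"):
        "keep going and arrive at the conference hotel",
    ("right direction", "believe: lost"):
        "spend 5 minutes asking for directions you didn't need",
    ("totally lost", "believe: right direction"):
        "keep going and eventually drive the rental car into the sea",
    ("totally lost", "believe: lost"):
        "5 minutes of directions, then 40 minutes back the other way",
}

for reality, belief in product(["right direction", "totally lost"],
                               ["believe: right direction", "believe: lost"]):
    print(f"Territory: {reality:15} | Map: {belief:27} -> {outcomes[(reality, belief)]}")
```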

(Mainstream status here.)

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: “Rationality: Appreciating Cognitive Algorithms”

Previous post: “The Useful Idea of Truth”