The Useful Idea of Truth

(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI. For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows. And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation. Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)


I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan

I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico

What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche


The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.

  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.

  3. Anne leaves the room, and Sally returns.

  4. The experimenter asks the child where Sally will look for her marble.

Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.

(Attributed to: Baron-Cohen, S., Leslie, A. M. and Frith, U. (1985) ‘Does the autistic child have a “theory of mind”?’, Cognition, vol. 21, pp. 37–46.)

Human children over the age of (typically) four first begin to understand what it means for Sally to lose her marbles—for Sally’s beliefs to stop corresponding to reality. A three-year-old has a model only of where the marble is. A four-year-old is developing a theory of mind; they separately model where the marble is and where Sally believes the marble is, so they can notice when the two conflict—when Sally has a false belief.

Any meaningful belief has a truth-condition, some way reality can be which can make that belief true, or alternatively false. If Sally’s brain holds a mental image of a marble inside the basket, then, in reality itself, the marble can actually be inside the basket—in which case Sally’s belief is called ‘true’, since reality falls inside its truth-condition. Or alternatively, Anne may have taken out the marble and hidden it in the box, in which case Sally’s belief is termed ‘false’, since reality falls outside the belief’s truth-condition.
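
To make the correspondence concrete, here is a minimal sketch in Python (my own illustration, not part of the experiment) that stores the marble’s actual location and Sally’s belief as two separate things, and calls the belief ‘true’ exactly when reality falls inside its truth-condition:

    # Hypothetical illustration: the belief and the world are distinct objects
    # that we can nonetheless compare.
    reality = {"marble_location": "box"}          # Anne moved the marble.
    sally_belief = {"marble_location": "basket"}  # Sally never saw the move.

    def belief_is_true(belief, world):
        """A belief is 'true' iff the world falls inside its truth-condition."""
        return belief["marble_location"] == world["marble_location"]

    print(belief_is_true(sally_belief, reality))  # False: Sally has a false belief.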

The mathematician Alfred Tarski once described the notion of ‘truth’ via an infinite family of truth-conditions:

  • The sentence ‘snow is white’ is true if and only if snow is white.

  • The sentence ‘the sky is blue’ is true if and only if the sky is blue.

When you write it out that way, it looks like the distinction might be trivial—indeed, why bother talking about sentences at all, if the sentence looks so much like reality when both are written out as English?
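
One way to see that the schema isn’t entirely trivial is to keep the quoted sentence and the unquoted condition as different kinds of objects. Here is a minimal sketch, with a toy world state invented purely for illustration: the sentence is a string, while its truth-condition is checked against the world.

    # The *sentence* is a map-side object (a string); its *condition* is
    # evaluated against the territory-side world state.
    world = {"snow_color": "white", "sky_color": "blue"}

    truth_conditions = {
        "snow is white": lambda w: w["snow_color"] == "white",
        "the sky is blue": lambda w: w["sky_color"] == "blue",
        "the sky is green": lambda w: w["sky_color"] == "green",
    }

    for sentence, condition in truth_conditions.items():
        print(f"'{sentence}' is true: {condition(world)}")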

But when we go back to the Sally-Anne task, the difference looks much clearer: Sally’s belief is embodied in a pattern of neurons and neural firings inside Sally’s brain, three pounds of wet and extremely complicated tissue inside Sally’s skull. The marble itself is a small simple plastic sphere, moving between the basket and the box. When we compare Sally’s belief to the marble, we are comparing two quite different things.

(Then why talk about these abstract ‘sentences’ instead of just neurally embodied beliefs? Maybe Sally and Fred believe “the same thing”, i.e., their brains both have internal models of the marble inside the basket—two brain-bound beliefs with the same truth condition—in which case the thing these two beliefs have in common, the shared truth condition, is abstracted into the form of a sentence or proposition that we imagine being true or false apart from any brains that believe it.)

Some pundits have panicked over the point that any judgment of truth—any comparison of belief to reality—takes place inside some particular person’s mind; and indeed seems to just compare someone else’s belief to your belief:

So is all this talk of truth just comparing other people’s beliefs to our own beliefs, and trying to assert privilege? Is the word ‘truth’ just a weapon in a power struggle?

For that matter, you can’t even directly compare other people’s beliefs to your own beliefs. You can only internally compare your beliefs about someone else’s belief to your own belief—compare your map of their map, to your map of the territory.

Similarly, to say of one of your own beliefs that it is ‘true’ just means you’re comparing your map of your map to your map of the territory. People are usually not mistaken about what they themselves believe—though there are certain exceptions to this rule—so the map of the map is usually accurate, i.e., people are usually right about what they believe.

And so saying ‘I believe the sky is blue, and that’s true!’ typically conveys the same information as ‘I believe the sky is blue’ or just saying ‘The sky is blue’ - namely, that your mental model of the world contains a blue sky.
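
In code, the self-ascription point might look like the following sketch; the variable names are mine, invented for illustration, and the key feature is that both sides of the comparison live inside one agent’s head.

    # Everything below is inside a single agent's head.
    my_map_of_territory = {"sky_color": "blue"}              # my model of the world
    my_map_of_my_map = {"I believe the sky is blue": True}   # my model of my own beliefs

    # Saying "I believe the sky is blue, and that's true!" compares my map of my
    # map against my map of the territory -- no photon from the actual sky is consulted.
    claim_true = (my_map_of_my_map["I believe the sky is blue"]
                  and my_map_of_territory["sky_color"] == "blue")
    print(claim_true)  # True, according to my own maps.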

Meditation:

If the above is true, aren’t the postmodernists right? Isn’t all this talk of ‘truth’ just an attempt to assert the privilege of your own beliefs over others, when there’s nothing that can actually compare a belief to reality itself, outside of anyone’s head?

(A ‘meditation’ is a puzzle that the reader is meant to attempt to solve before continuing. It’s my somewhat awkward attempt to reflect the research which shows that you’re much more likely to remember a fact or solution if you try to solve the problem yourself before reading the solution; succeed or fail, the important thing is to have tried first. This also reflects a problem Michael Vassar thinks is occurring, which is that since LW posts often sound obvious in retrospect, it’s hard for people to visualize the diff between ‘before’ and ‘after’; and this diff is also useful to have for learning purposes. So please try to say your own answer to the meditation—ideally whispering it to yourself, or moving your lips as you pretend to say it, so as to make sure it’s fully explicit and available for memory—before continuing; and try to consciously note the difference between your reply and the post’s reply, including any extra details present or missing, without trying to minimize or maximize the difference.)

...
...
...

Reply:

The reply I gave to Dale Carrico—who declaimed to me that he knew what it meant for a belief to be falsifiable, but not what it meant for beliefs to be true—was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results. If I believe very strongly that I can fly, then this belief may lead me to step off a cliff, expecting to be safe; but only the truth of this belief can possibly save me from plummeting to the ground and ending my experiences with a splat.

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies ‘beliefs’, and the latter thingy ‘reality’.
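
A hedged sketch of that division of labor, with a made-up ‘reality’ function standing in for whatever actually happens when you step off the cliff:

    # Illustration only: beliefs feed the prediction; reality feeds the result.
    belief = {"I can fly": True}

    def my_prediction(beliefs):
        # The prediction is computed from my beliefs...
        return "float safely" if beliefs["I can fly"] else "fall"

    def reality_outcome():
        # ...but the result is determined by something outside my head.
        return "fall"

    prediction, result = my_prediction(belief), reality_outcome()
    print(prediction, result, prediction == result)  # float safely fall False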

You won’t get a direct collision between belief and reality—or between someone else’s beliefs and reality—by sitting in your living-room with your eyes closed. But the situation is different if you open your eyes!

Consider how your brain ends up knowing that its shoelaces are untied:

  • A photon departs from the Sun, and flies to the Earth and through Earth’s atmosphere.

  • Your shoelace absorbs and re-emits the photon.

  • The reflected photon passes through your eye’s pupil and toward your retina.

  • The photon strikes a rod cell or cone cell, or to be more precise, it strikes a photoreceptor, a form of vitamin A known as retinal, which undergoes a change in its molecular shape (rotating around a double bond) powered by absorption of the photon’s energy. A bound protein called an opsin undergoes a conformational change in response, and this further propagates to a neural cell body which pumps a proton and increases its polarization.

  • The gradual polarization change is propagated to a bipolar cell and then a ganglion cell. If the ganglion cell’s polarization goes over a threshold, it sends out a nerve impulse, a propagating electrochemical phenomenon of polarization-depolarization that travels through the brain at between 1 and 100 meters per second. Now the incoming light from the outside world has been transduced to neural information, commensurate with the substrate of other thoughts.

  • The neural signal is preprocessed by other neurons in the retina, further preprocessed by the lateral geniculate nucleus in the middle of the brain, and then, in the visual cortex located at the back of your head, reconstructed into an actual little tiny picture of the surrounding world—a picture embodied in the firing frequencies of the neurons making up the visual field. (A distorted picture, since the center of the visual field is processed in much greater detail—i.e. spread across more neurons and more cortical area—than the edges.)

  • Information from the visual cortex is then routed to the temporal lobes, which handle object recognition.

  • Your brain recognizes the form of an untied shoelace.

And so your brain updates its map of the world to include the fact that your shoelaces are untied. Even if, previously, it expected them to be tied! There’s no reason for your brain not to update if politics aren’t involved. Once photons heading into the eye are turned into neural firings, they’re commensurate with other mind-information and can be compared to previous beliefs.

Belief and reality interact all the time. If the environment and the brain never touched in any way, we wouldn’t need eyes—or hands—and the brain could afford to be a whole lot simpler. In fact, organisms wouldn’t need brains at all.

So, fine, belief and reality are distinct entities which do intersect and interact. But to say that we need separate concepts for ‘beliefs’ and ‘reality’ doesn’t get us to needing the concept of ‘truth’, a comparison between them. Maybe we can just separately (a) talk about an agent’s belief that the sky is blue and (b) talk about the sky itself. Instead of saying, “Jane believes the sky is blue, and she’s right”, we could say, “Jane believes ‘the sky is blue’; also, the sky is blue” and convey the same information about what (a) we believe about the sky and (b) what we believe Jane believes. We could always apply Tarski’s schema—“The sentence ‘X’ is true iff X”—and replace every instance of alleged truth by talking directly about the truth-condition, the corresponding state of reality (i.e. the sky or whatever). Thus we could eliminate that bothersome word, ‘truth’, which is so controversial to philosophers, and misused by various annoying people.

Suppose you had a rational agent, or for concreteness, an Artificial Intelligence, which was carrying out its work in isolation and certainly never needed to argue politics with anyone. The AI knows that “My model assigns 90% probability that the sky is blue”; it is quite sure that this probability is the exact statement stored in its RAM. Separately, the AI models that “The probability that my optical sensors will detect blue out the window is 99%, given that the sky is blue”; and it doesn’t confuse this proposition with the quite different proposition that the optical sensors will detect blue whenever it believes the sky is blue. So the AI can definitely differentiate the map and the territory; it knows that the possible states of its RAM storage do not have the same consequences and causal powers as the possible states of sky.
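
As a toy sketch (the 90% and 99% figures come from the paragraph above; the structure and the 20% figure are invented for illustration), the AI’s model might look like this. The important feature is that the sensor prediction is conditioned on the state of the sky, not on the state of the AI’s own RAM:

    # Hypothetical contents of the AI's model.
    p_sky_is_blue = 0.90                     # the AI's credence about the territory
    p_sensor_blue_given_sky_blue = 0.99      # prediction conditioned on the territory
    p_sensor_blue_given_sky_not_blue = 0.20  # assumed figure, for illustration

    # The AI does NOT store p(sensor_blue | I_believe_sky_is_blue) = 0.99;
    # editing its own RAM would not change what the sensor sees.

    p_detect_blue = (p_sky_is_blue * p_sensor_blue_given_sky_blue
                     + (1 - p_sky_is_blue) * p_sensor_blue_given_sky_not_blue)
    print(p_detect_blue)  # roughly 0.911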

But does this AI ever need a concept for the notion of truth in general—does it ever need to invent the word ‘truth’? Why would it work better if it did?

Meditation: If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for ‘truth’?

...
...
...

Reply: The abstract concept of ‘truth’—the general idea of a map-territory correspondence—is required to express ideas such as:

  • Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.

  • To draw a true map of a city, someone has to go out and look at the buildings; there’s no way you’d end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

  • True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

This is the main benefit of talking and thinking about ‘truth’: that we can generalize rules about how to make maps match territories, and learn lessons that transfer beyond particular skies being blue.
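
The last bullet can be turned into a tiny worked example. The following is my own sketch, not anything from the post: a simple Bayesian update over two made-up hypotheses about a biased coin, showing credence shifting toward the hypothesis whose predictions keep coming true.

    import random

    # Reality secretly matches hypothesis A; credence is updated by multiplying
    # in the likelihood of each observed result and renormalizing.
    true_p_heads = 0.7
    hypotheses = {"A: p(heads)=0.7": 0.7, "B: p(heads)=0.3": 0.3}
    credence = {name: 0.5 for name in hypotheses}   # start undecided

    random.seed(0)
    for _ in range(50):
        heads = random.random() < true_p_heads         # reality determines the result
        for name, p in hypotheses.items():
            credence[name] *= p if heads else (1 - p)  # beliefs determined the prediction
        total = sum(credence.values())
        credence = {name: c / total for name, c in credence.items()}

    print(credence)  # credence has concentrated on the hypothesis that predicted well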



Complete philosophical panic has turned out not to be justified (it never is). But there is a key practical problem that results from our internal evaluation of ‘truth’ being a comparison of a map of a map, to a map of reality: On this schema it is very easy for the brain to end up believing that a completely meaningless statement is ‘true’.

Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all ‘post-utopians’, which you can tell because their writings exhibit signs of ‘colonial alienation’. For most college students the typical result will be that their brain’s version of an object-attribute list will assign the attribute ‘post-utopian’ to the authors Carol, Danny, and Elaine. When the subsequent test asks for “an example of a post-utopian author”, the student will write down “Elaine”. What if the student writes down, “I think Elaine is not a post-utopian”? Then the professor models thusly...

...and marks the answer false.

After all...

  • The sentence “Elaine is a post-utopian” is true if and only if Elaine is a post-utopian.

...right?

Now of course it could be that this term does mean something (even though I made it up). It might even be that, although the professor can’t give a good explicit answer to “What is post-utopianism, anyway?”, you can nonetheless take many literary professors and separately show them new pieces of writing by unknown authors and they’ll all independently arrive at the same answer, in which case they’re clearly detecting some sensory-visible feature of the writing. We don’t always know how our brains work, and we don’t always know what we see, and the sky was seen as blue long before the word “blue” was invented; for a part of your brain’s world-model to be meaningful doesn’t require that you can explain it in words.

On the other hand, it could also be the case that the professor learned about “colonial alienation” by memorizing what to say to his professor. It could be that the only person whose brain assigned a real meaning to the word is dead. So by the time the students are learning that “post-utopian” is the password when hit with the query “colonial alienation?”, both phrases are just verbal responses to be rehearsed, nothing but an answer on a test.

The two phrases don’t feel “disconnected” individually because they’re connected to each other—post-utopianism has the apparent consequence of colonial alienation, and if you ask what colonial alienation implies, it means the author is probably a post-utopian. But if you draw a circle around both phrases, they don’t connect to anything else. They’re floating beliefs not connected with the rest of the model. And yet there’s no internal alarm that goes off when this happens. Just as “being wrong feels like being right”—just as having a false belief feels the same internally as having a true belief, at least until you run an experiment—having a meaningless belief can feel just like having a meaningful belief.

(You can even have fights over completely meaningless beliefs. If someone says “Is Elaine a post-utopian?” and one group shouts “Yes!” and the other group shouts “No!”, they can fight over having shouted different things; it’s not necessary for the words to mean anything for the battle to get started. Heck, you could have a battle over one group shouting “Mun!” and the other shouting “Fleem!” More generally, it’s important to distinguish the visible consequences of the professor-brain’s quoted belief (students had better write down a certain thing on his test, or they’ll be marked wrong) from the proposition that there’s an unquoted state of reality (Elaine actually being a post-utopian in the territory) which has visible consequences.)
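
One way to picture a ‘floating’ belief is as a node in a graph of beliefs that never connects, however indirectly, to anything with sensory consequences. A minimal sketch, with a belief network made up purely for illustration:

    # Hypothetical belief network: edges mean "this belief connects to that one".
    edges = {
        "post-utopian": {"colonial alienation"},
        "colonial alienation": {"post-utopian"},
        "sky is blue": {"expect to see blue when I look up"},
        "expect to see blue when I look up": set(),
    }
    observables = {"expect to see blue when I look up"}

    def reaches_observable(belief, edges, observables, seen=None):
        """Depth-first search: does this belief connect to anything observable?"""
        seen = seen if seen is not None else set()
        if belief in observables:
            return True
        seen.add(belief)
        return any(reaches_observable(n, edges, observables, seen)
                   for n in edges.get(belief, ()) if n not in seen)

    print(reaches_observable("post-utopian", edges, observables))  # False: floating
    print(reaches_observable("sky is blue", edges, observables))   # True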

One classic response to this problem was verificationism, which held that the sentence “Elaine is a post-utopian” is meaningless if it doesn’t tell us which sensory experiences we should expect to see if the sentence is true, and how those experiences differ from the case if the sentence is false.

But then suppose that I transmit a photon aimed at the void between galaxies—heading far off into space, away into the night. In an expanding universe, this photon will eventually cross the cosmological horizon where, even if the photon hit a mirror reflecting it squarely back toward Earth, the photon would never get here because the universe would expand too fast in the meanwhile. Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement “The photon continues to exist, rather than blinking out of existence.”

And yet it seems to me—and I hope to you as well—that the statement “The photon suddenly blinks out of existence as soon as we can’t see it, violating Conservation of Energy and behaving unlike all photons we can actually see” is false, while the statement “The photon continues to exist, heading off to nowhere” is true. And this sort of question can have important policy consequences: suppose we were thinking of sending off a near-light-speed colonization vessel as far away as possible, so that it would be over the cosmological horizon before it slowed down to colonize some distant supercluster. If we thought the colonization ship would just blink out of existence before it arrived, we wouldn’t bother sending it.

It is both useful and wise to ask after the sensory consequences of our beliefs. But it’s not quite the fundamental definition of meaningful statements. It’s an excellent hint that something might be a disconnected ‘floating belief’, but it’s not a hard-and-fast rule.

You might next try the answer that for a statement to be meaningful, there must be some way reality can be which makes the statement true or false; and that since the universe is made of atoms, there must be some way to arrange the atoms in the universe that would make a statement true or false. E.g. to make the statement “I am in Paris” true, we would have to move the atoms comprising myself to Paris. A literateur claims that Elaine has an attribute called post-utopianism, but there’s no way to translate this claim into a way to arrange the atoms in the universe so as to make the claim true, or alternatively false; so it has no truth-condition, and must be meaningless.

Indeed there are claims where, if you pause and ask, “How could a universe be arranged so as to make this claim true, or alternatively false?”, you’ll suddenly realize that you didn’t have as strong a grasp on the claim’s truth-condition as you believed. “Suffering builds character”, say, or “All depressions result from bad monetary policy.” These claims aren’t necessarily meaningless, but they’re a lot easier to say than to visualize the universe that makes them true or false. Just as asking after sensory consequences is an important hint to meaning or meaninglessness, so is asking how to configure the universe.

But if you say there has to be some arrangement of atoms that makes a meaningful claim true or false...

Then the theory of quantum mechanics would be meaningless a priori, because there’s no way to arrange atoms to make the theory of quantum mechanics true.

And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false—since there’d be no atoms arranged to fulfill their truth-conditions.

Meditation: What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?


  • Meditation Answers - (A central comment for readers who want to try answering the above meditation (before reading whatever post in the Sequence answers it) or read contributed answers.)

  • Mainstream Status - (A central comment where I say what I think the status of the post is relative to mainstream modern epistemology or other fields, and people can post summaries or excerpts of any papers they think are relevant.)

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: “Skill: The Map is Not the Territory”