Why I Reject the Correspondence Theory of Truth

This post began life as a comment responding to Peer Gynt’s request for a steelman of non-correspondence views of truth. It ended up being far too long for a comment, so I’ve decided to make it a separate post. However, it might have the rambly quality of a long comment rather than a fully planned-out post.

Evaluating Models

Let’s say I’m presented with a model and I’m wondering whether I should incorporate it into my belief-set. There are several different ways I could go about evaluating the model, but for now let’s focus on two. The first is pragmatic. I could ask how useful the model would be for achieving my goals. Of course, this criterion of evaluation depends crucially on what my goals actually are. It must also take into account several other factors, including my cognitive abilities (perhaps I am better at working with visual rather than verbal models) and the effectiveness of alternative models available to me. So if my job is designing cannons, perhaps Newtonian mechanics is a better model than relativity, since the calculations are easier and there is no significant difference in the efficacy of the technology I would create using either model correctly. On the other hand, if my job is designing GPS systems, relativity might be a better model, with the increased difficulty of calculations being compensated for by a significant improvement in effectiveness. If I design both cannons and GPS systems, then which model is better will vary with context.
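
To make that context-dependence concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the efficacy numbers, the cost figures, the goal names); the only point is the shape of the comparison: pragmatic evaluation scores a model by goal-relative payoff net of the cost of using it, so the ranking flips when the goal changes.

```python
# Toy illustration of pragmatic model evaluation. All numbers are
# invented; only the *structure* of the comparison matters.

MODELS = {
    "newtonian": {"calculation_cost": 1.0,
                  "efficacy": {"cannon_design": 9.9, "gps_design": 2.0}},
    "relativity": {"calculation_cost": 5.0,
                   "efficacy": {"cannon_design": 10.0, "gps_design": 9.5}},
}

def pragmatic_score(model: dict, goal: str) -> float:
    """Goal-relative payoff minus the cost of working with the model."""
    return model["efficacy"][goal] - model["calculation_cost"]

def best_model(goal: str) -> str:
    """Pick whichever model scores highest *for this particular goal*."""
    return max(MODELS, key=lambda name: pragmatic_score(MODELS[name], goal))

print(best_model("cannon_design"))  # newtonian: nearly as effective, far cheaper
print(best_model("gps_design"))     # relativity: the extra effort pays off
```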

Another mode of evaluation is correspondence with reality, the extent to which the model accurately represents its domain. In this case, you don’t have much of the context-sensitivity that’s associated with pragmatic evaluation. Newtonian mechanics may be more effective than the theory of relativity at achieving certain goals, but (conventional wisdom says) relativity is nonetheless a more accurate representation of the world. If the cannon maker believes in Newtonian mechanics, his beliefs don’t correspond with the world as well as they should. According to correspondence theorists, it is this mode of evaluation that is relevant when we’re interested in truth. We want to know how well a model mimics reality, not how useful it is.

I’m sure most correspondence theorists would say that the usefulness of a model is linked to its truth. One major reason why certain models work better than others is that they are better representations of the territory. But these two virtues can come apart. In certain contexts, a less accurate theory may be more useful or effective for achieving certain goals than a more accurate one. So, according to a correspondence theorist, figuring out which model is most effective in a given context is not the same thing as figuring out which model is true.

How do we go about these two modes of evaluation? Well, evaluation of the pragmatic success of a model is pretty easy. Say I want to figure out which of several models will best serve the purpose of keeping me alive for the next 30 days. I can randomly divide my army of graduate students into several groups, force each group to behave according to the dictates of a separate model, and then check which group has the highest number of survivors after 30 days. Something like that, at least.
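
Here is a minimal sketch of that experiment, assuming we simulate the agents rather than conscript actual graduate students. The models and their per-day survival probabilities are stand-ins I’ve made up; the structure (one cohort per model, fixed horizon, count survivors) is the point.

```python
import random

# Hypothetical per-day survival probability an agent enjoys when acting
# on each model; these numbers are stand-ins, not real data.
SURVIVAL_PROB = {"model_a": 0.99, "model_b": 0.95, "model_c": 0.90}

def run_trial(model: str, group_size: int = 100, days: int = 30) -> int:
    """Force one group to act on `model` and count survivors after `days`."""
    survivors = 0
    for _ in range(group_size):
        # An agent survives the trial only by surviving every single day.
        if all(random.random() < SURVIVAL_PROB[model] for _ in range(days)):
            survivors += 1
    return survivors

# Each model gets its own simulated cohort; the most pragmatically
# successful model is simply the one whose cohort fares best.
results = {m: run_trial(m) for m in SURVIVAL_PROB}
best = max(results, key=results.get)
print(results, "-> most pragmatically successful:", best)
```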

But how do I evaluate whether a model corresponds with reality? The first step would presumably involve establishing correspondences between parts of my model and parts of the world. For example, I could say “Let mS in my model represent the mass of the Sun.” Then I check to see if the structural relations between the bits of my model match the structural relations between the corresponding bits of the world. Sounds simple enough, right? Not so fast! The procedure described above relies on being able to establish (either by stipulation or discovery) relations between the model and reality. That presupposes that we have access to both the model and to reality, in order to correlate the two. In what sense do we have “access” to reality, though? How do I directly correlate a piece of reality with a piece of my model?
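
For concreteness, here’s a sketch of what “establishing correspondences and checking structural relations” might amount to, with both sides written as small data structures (all the names and relations are invented). Note, as a preview of the worry below: the “world” side of the check is itself just another description I had to write down.

```python
# A crude picture of correspondence-checking: map parts of the model onto
# parts of "the world", then test whether relations are preserved.
# Crucially, WORLD_RELATIONS below is itself just another description,
# i.e. another model; there is no way to put unmediated reality in a dict.

MODEL_RELATIONS = {("m_S", "heavier_than", "m_E")}    # the Sun outweighs the Earth
CORRESPONDENCE = {"m_S": "sun", "m_E": "earth"}       # stipulated mapping
WORLD_RELATIONS = {("sun", "heavier_than", "earth")}  # the "reality" side

def corresponds(model: set, mapping: dict, world: set) -> bool:
    """True if every relation in the model maps onto a relation in `world`."""
    return all((mapping[a], rel, mapping[b]) in world for (a, rel, b) in model)

print(corresponds(MODEL_RELATIONS, CORRESPONDENCE, WORLD_RELATIONS))  # True
```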

Models and Reality

Our access to the external world is entirely mediated by models, either models that we consciously construct (like quantum field theory) or models that our brains build unconsciously (like the model of my immediate environment produced in my visual cortex). There is no such thing as pure, unmediated, model-free access to reality. But we often do talk about comparing our models to reality. What’s going on here? Wouldn’t such a comparison require us to have access to reality independent of the models? Well, if you think about it, whenever we claim to be comparing a model to reality, we’re really comparing one model to another model. It’s just that we’re treating the second model as transparent, as an uncontroversial proxy for reality in that context. Those last three words matter: a model that is used as a criterion for reality in one investigative context might be regarded as controversial (as explicitly a model of reality rather than reality itself) in another context.

Let’s say I’m comparing a drawing of a person to the actual person. When I say things like “The drawing has a scar on the left side of the face, but in reality the scar is on the right side”, I’m using the deliverances of visual perception as my criterion for “reality”. But in another context, say if I’m talking about the psychology of perception, I’d talk about my perceptual model as compared (and, therefore, contrasted) to reality. In this case my criterion for reality will be something other than perception, say the readings from some sort of scientific instrument. So we could say things like, “Subjects perceive these two colors as the same, but in reality they are not.” But by “reality” here we mean something like “the model of the system generated by instruments that measure surface reflectance properties, which in turn are built based on widely accepted scientific models of optical phenomena”.

When we ordinarily talk about correspondence between models and reality, we’re really talking about the correspondence between bits of one model and bits of another model. The correspondence theory of truth, however, describes truth as a correspondence relation between a model and the world itself. Not another model of the world, the world. And that, I contend, is impossible. We do not have direct access to the world. When I say “Let mS represent the mass of the Sun”, what I’m really doing is correlating a mathematical model with a verbal model, not with immediate reality. Even if someone asks me “What’s the Sun?”, and I point at the big light in the sky, all I’m doing is correlating a verbal model with my visual model (a visual model which I’m fairly confident is extremely similar, though not identical, to the visual model of my interlocutor). Describing correspondence as a relationship between models and the world, rather than a relationship between models and other models, is a category error.

So I can go about the procedure of establishing correspondences all I want, correlating one model with another. All this will ultimately get me is coherence. If all my models correspond with one another, then I know that there is no conflict between my different models. My theoretical model coheres with my visual model, which coheres with my auditory model, and so on. Some philosophers have been content to rest here, deciding that coherence is all there is to truth. If the deliverances of my scientific models match up perfectly with the deliverances of my perceptual models, I can say they are true. But there is something very unsatisfactory about this stance. The world has just disappeared. Truth, if it is anything at all, involves both our models and the world. However, the world doesn’t feature in the coherence conception of truth. I could be floating in a void, hallucinating various models that happen to cohere with one another perfectly, and I would have attained the truth. That can’t be right.
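
Here’s a sketch of what such a coherence check amounts to, with each model’s deliverances written as made-up numbers for shared quantities. Notice that nothing in the computation refers to the world itself; a perfectly coherent hallucination would pass it just as well.

```python
from itertools import combinations

# Each model's "deliverances": its reported values for various quantities.
# The names and numbers are invented; what matters is what the check
# does and, more importantly, does not touch.
deliverances = {
    "theoretical_model": {"brightness": 1.00, "position": 45.0},
    "visual_model":      {"brightness": 1.01, "position": 45.1},
    "auditory_model":    {"position": 45.0},
}

def coheres(models: dict, tolerance: float = 0.5) -> bool:
    """True if every pair of models agrees (within tolerance) on every
    quantity they both report. No term here refers to the world itself."""
    for (_, m1), (_, m2) in combinations(models.items(), 2):
        for key in m1.keys() & m2.keys():
            if abs(m1[key] - m2[key]) > tolerance:
                return False
    return True

print(coheres(deliverances))  # True: models only ever meet other models
```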

Correspondence Can’t Be Causal

The correspondence theorist may object that I’ve stacked the deck by requiring that one consciously establish correlations between models and the world. The correspondence isn’t a product of stipulation or discovery, it’s a product of basic causal connections between the world and my brain. This seems to be Eliezer’s view. Correspondence relations are causal relations. My model of the Sun corresponds with the behavior of the actual Sun, out there in the real world, because my model was produced by causal interactions between the actual Sun and my brain.

But I don’t think this maneuver can save the correspondence theory. The correspondence theory bases truth on a representational relationship between models/beliefs and the world. A model is true if it accurately represents its domain. Representation is a normative relationship. Causation is not. What I mean by this is that representation has correctness conditions. You can meaningfully say “That’s a good representation” or “That’s a bad representation”. There is no analog with causation. There’s no sense in which some particular putatively causal relation ends up being a “bad” causal relation. Ptolemy’s beliefs about the Sun’s motion were causally entangled with the Sun, yet we don’t want to say that those beliefs are accurate. It seems mere causal entanglement is insufficient. We need to distinguish between the right sort of causal entanglement (the sort that gets you an accurate picture of the world) and the wrong sort. But figuring out this distinction takes us back to the original problem. If we only have immediate access to models, on what basis can we decide whether our models are caused by the world in a manner that produces an accurate picture? To determine this, it seems we again need unmediated access to the world.

Back to Pragmatism

Ultimately, it seems to me the only clear criterion the correspondence theorist can establish for correlating the model with the world is actual empirical success. Use the model and see if it works for you, if it helps you attain your goals. But this is exactly the same as the pragmatic mode of evaluation which I described above. And the representational mode of evaluation is supposed to differ from this.

The correspondence theorist could say that pragmatic success is a proxy for representational success. Not a perfect proxy, but good enough. The response is, “How do you know?” If you have no independent means of determining representational success, if you have no means of calibration, how can you possibly determine whether or not pragmatic success is a good proxy for representational success? I mean, I guess you can just assert that a model that is extremely pragmatically successful for a wide range of goals also corresponds well with reality, but how does that assertion help your theory of truth? It seems otiose. Better to just associate truth with pragmatic success itself, rather than adding the unjustifiable assertion to rescue the correspondence theory.

So yeah, ultimately I think the second of the two means of evaluating models I described at the beginning (correspondence) can only really establish coherence between your various models, not correspondence between your models and the world. Since that sort of evaluation is not world-involving, it is not the correct account of truth. Pragmatic evaluation, on the other hand, *is* world-involving. You’re testing your models against the world, seeing how effective they are at helping you accomplish your goals. That is the appropriate normative relationship between your beliefs and the world, so if anything deserves to be called “truth”, it’s pragmatic success, not correspondence.

This has consequences for our conception of what “reality” is. If you’re a correspondence theorist, you think reality must have some form of structural similarity to our beliefs. Without some similarity in structure (or at least potential similarity), it’s hard to say how one could meaningfully talk about beliefs representing reality or corresponding to reality. Pragmatism, on the other hand, has a much thinner conception of reality. The real world, on the pragmatic conception, is just an external constraint on the efficacy of our models. We try to achieve certain goals using our models and something pushes back, stymieing our efforts. Then we need to build improved models in order to counteract this resistance. Bare unconceptualized reality, on this view, is not a highly structured field whose structure we are trying to grasp. It is a brute, basic constraint on effective action.

It turns out that working around this constraint requires us to build complex models: scientific models, perceptual models, and more. These models become proxies for reality, and we treat various models as “transparent”, as giving us a direct view of reality, in various contexts. This is a useful tool for dealing with the constraints offered by reality. The models are highly structured, so in many contexts it makes sense to talk about reality as highly structured, and to talk about our other models matching reality. But it is also important to realize that when we say “reality” in those contexts, we are really talking about some model, and in other contexts that model need not be treated as transparent. Not realizing this is an instance of the mind projection fallacy. If you want a context-independent, model-independent notion of reality, I think you can say no more about it than “a constraint on our models’ efficacy”.

That sort of reality is not something you represent (since representation assumes structural similarity); it’s something you work around. Our models don’t mimic that reality; they are tools we use to facilitate effective action under the constraints posed by reality. All of this, as I said at the beginning, is goal- and context-dependent, unlike the purported correspondence-theory mode of evaluating models. That may not be satisfactory, but I think it’s the best we have. Pragmatist theory of truth for the win.