The Problematic Third Person Perspective

[Epistemic status: I now endorse this again. Michael pointed out a possibility for downside risk with losing mathematical ability, which initially made me update away from the view here. However, some experience noticing what it is like to make certain kinds of mathematical progress made me return to the view presented here. Maybe don't take this post as inspiration to engage in extreme rejection of objectivity.]

There are a number of conversational norms based on the idea of an imaginary impartial observer who needs to be convinced. It’s the adversarial courtroom model of conversation. Better norms, such as common crux, can be established by recognizing that a conversation is taking place between two people.

Burden-of-proof is one of these problematic ideas. The idea that there is some kind of standard which would put the burden on one person or another would only make sense if there were a judge to convince. If anything, it would be better to say the burden of proof is on both people in any argument, in the sense that they are responsible for conveying their own views to the other person. If burden-of-proof is about establishing that they “should” give in to your position, it accomplishes nothing; you need to convince them of that, not yourself. If burden-of-proof is about establishing that you don’t have to believe them until they say more… well, that was true anyway, but perhaps speaks to a lack of curiosity on your part.

More generally, this external-judge intuition promotes the bad model that there are objective standards of logic which must be adhered to in a debate. There are epistemic standards which it is good to adhere to, including logic and notions of probabilistic evidence. But, if the other person has different standards, then you have to either work with them or discuss the differences. There’s a failure mode of the overly rationalistic where you just get angry that their arguments are illogical and they’re not accepting your perfectly-formatted arguments, so you try to get them to bow down to your standards by force of will. (The same failure mode applies to treating definitions as objective standards which must be adhered to.) What good does it do to continue arguing with them via standards you already know differ from theirs? Try to understand and engage with their real reasons rather than replacing them with imaginary things.

Actually, it’s even worse than this, because you don’t know your own standards of evidence completely. So, the imaginary impartial judge is also interfering with your ability to get in touch with your real reasons, what you really think, and what might sway you one way or the other. If your mental motion is to reach for justifications which the impartial judge would accept, you are rationalizing rather than finding your true rejection. You have to realize that you’re using standards of evidence that you yourself don’t fully understand, and live in that world—otherwise you rob yourself of the ability to improve your tools.

This happens in two ways that I can think of.

  • Maybe your explicit standards are good, but not perfect. You notice beliefs that are not up to your standards, and you drop them reflexively. This might be a good idea most of the time, but there are two things wrong with the policy. First, you might have dropped a good belief. You could have done better by checking which you trusted more in this instance: the beliefs, or your standards of belief. Second, you’ve missed an opportunity to improve your explicit standards. You could have explored your reasons for believing what you did, and compared them to your explicit standards for belief.

  • Maybe you don’t notice the difference between your explicit standards and the way you actually arrive at your beliefs. You assume implicitly that if you believe something strongly, it’s because there are strong reasons of the sort you endorse. This is especially likely if the beliefs pattern-match to the sort of thing your standards endorse; for example, being very sciency. As a result, you miss an opportunity to notice that you’re rationalizing something. You would have done better to first look for the reasons you really believed the thing, and then check whether they meet your explicit standards and whether the belief still seems worth endorsing.

So far, I’ve argued that the imaginary judge creates problems in two domains: navigating disagreements with other people, and navigating your own epistemic standards. I’ll note a third domain where the judge seems problematic: judging your own actions and decisions. Many people use an imaginary judge to guide their actions. This leads to pitfalls such as moral self-licensing, in which doing good things gives you a license to do more bad things (setting up a budget makes you feel good enough about your finances that you can go on a spending spree, eating a salad for lunch makes you more likely to treat yourself to ice cream after work, etc.). Getting rid of the internal judge is an instance of Nate’s Replacing Guilt, and carries similar risks: if you’re currently using the internal judge for a bunch of important things, you have to either make sure you replace it with other working strategies, or be OK with kicking those things to the roadside (at least temporarily).

Similarly with the other two categories I mentioned. Noticing the dysfunctions of the imaginary-judge perspective should not make you immediately remove it; invoke Chesterton’s Fence. However, I would encourage you to experiment with removing the imaginary third person from your conversations, and seeing what you do when you remind yourself that there’s no one looking over your shoulder in your private mental life. I think this relates to a larger ontological shift which Val was also pointing toward in In Praise of Fake Frameworks. There is no third-person perspective. There is no view from nowhere. This isn’t a rejection of reductionism, but a reminder that we haven’t finished yet. This isn’t a rejection of the principles of rationality, but a reminder that we are created already in motion, and there is no argument so persuasive it would move a rock.

And, more basically, it is a reminder that the map is not the territory, because humans confuse the two by default. The picture in your head isn’t what’s there to be seen. Putting pieces of your judgement inside an imaginary impartial judge doesn’t automatically make it true. Perhaps it does really make it more trustworthy—you “promote” your better heuristics by wrapping them up inside the judge, giving them authority over the rest. But, this system has its problems. It can create perverse incentives on the other parts of your mind, to please the judge in ways that let them get away with what they want. It can make you blind to other ways of being. It can make you think you’ve avoided map-territory confusion once and for all—“See? It’s written right there on my soul: DO NOT CONFUSE MAP AND TERRITORY. It is simply something I don’t do.”—while really passing the responsibility to a special part of your map which is now almost always confused for the territory.

So, laugh at the judge a little. Look out for your real reasons for thinking and doing things. Notice whether your arguments seem tailored to convince your judge rather than the person in front of you. See where it leads you.