# The Problematic Third Person Perspective

[Epistemic status: I now endorse this again. Michael pointed out a possibility for downside risk with losing mathematical ability, which initially made me update away from the view here. However, some experience noticing what it is like to make certain kinds of mathematical progress made me return to the view presented here. Maybe don’t take this post as inspiration to engage in extreme rejection of objectivity.]

There are a number of conversational norms based on the idea of an imaginary impartial observer who needs to be convinced. It’s the adversarial courtroom model of conversation. Better norms, such as common crux, can be established by recognizing that a conversation is taking place between two people.

Burden-of-proof is one of these problematic ideas. The idea that there is some kind of standard which would put the burden on one person or another would only make sense if there were a judge to convince. If anything, it would be better to say the burden of proof is on both people in any argument, in the sense that they are responsible for conveying their own views to the other person. If burden-of-proof is about establishing that they “should” give in to your position, it accomplishes nothing; you need to convince them of that, not yourself. If burden-of-proof is about establishing that you don’t have to believe them until they say more… well, that was true anyway, but perhaps speaks to a lack of curiosity on your part.

More generally, this external-judge intuition promotes the bad model that there are objective standards of logic which must be adhered to in a debate. There are epistemic standards which it is good to adhere to, including logic and notions of probabilistic evidence. But, if the other person has different standards, then you have to either work with them or discuss the differences. There’s a failure mode of the overly rationalistic where you just get angry that their arguments are illogical and they’re not accepting your perfectly-formatted arguments, so you try to get them to bow down to your standards by force of will. (The same failure mode applies to treating definitions as objective standards which must be adhered to.) What good does it do to continue arguing with them via standards you already know differ from theirs? Try to understand and engage with their real reasons rather than replacing them with imaginary things.

Actually, it’s even worse than this, because you don’t know your own standards of evidence completely. So, the imaginary impartial judge is also interfering with your ability to get in touch with your real reasons, what you really think, and what might sway you one way or the other. If your mental motion is to reach for justifications which the impartial judge would accept, you are rationalizing rather than finding your true rejection. You have to realize that you’re using standards of evidence that you yourself don’t fully understand, and live in that world—otherwise you rob yourself of the ability to improve your tools.

This happens in two ways that I can think of.

• Maybe your explicit standards are good, but not perfect. You notice beliefs that are not up to your standards, and you drop them reflexively. This might be a good idea most of the time, but there are two things wrong with the policy. First, you might have dropped a good belief. You could have done better by checking which you trusted more in this instance: the beliefs, or your standards of belief. Second, you’ve missed an opportunity to improve your explicit standards. You could have explored your reasons for believing what you did, and compared them to your explicit standards for belief.

• Maybe you don’t notice the difference between your explicit standards and the way you actually arrive at your beliefs. You assume implicitly that if you believe something strongly, it’s because there are strong reasons of the sort you endorse. This is especially likely if the beliefs pattern-match to the sort of thing your standards endorse; for example, being very sciency. As a result, you miss an opportunity to notice that you’re rationalizing something. You would have done better to first look for the reasons you really believed the thing, and then check whether they meet your explicit standards and whether the belief still seems worth endorsing.

So far, I’ve argued that the imaginary judge creates problems in two domains: navigating disagreements with other people, and navigating your own epistemic standards. I’ll note a third domain where the judge seems problematic: judging your own actions and decisions. Many people use an imaginary judge to guide their actions. This leads to pitfalls such as moral self-licensing, in which doing good things gives you a license to do more bad things (setting up a budget makes you feel good enough about your finances that you can go on a spending spree, eating a salad for lunch makes you more likely to treat yourself to ice cream after work, etc.). Getting rid of the internal judge is an instance of Nate’s Replacing Guilt, and carries similar risks: if you’re currently using the internal judge for a bunch of important things, you have to either make sure you replace it with other working strategies, or be OK with kicking those things to the roadside (at least temporarily).

Similarly with the other two categories I mentioned. Noticing the dysfunctions of the imaginary-judge perspective should not make you immediately remove it; invoke Chesterton’s Fence. However, I would encourage you to experiment with removing the imaginary third person from your conversations, and seeing what you do when you remind yourself that there’s no one looking over your shoulder in your private mental life. I think this relates to a larger ontological shift which Val was also pointing toward in In Praise of Fake Frameworks. There is no third-person perspective. There is no view from nowhere. This isn’t a rejection of reductionism, but a reminder that we haven’t finished yet. This isn’t a rejection of the principles of rationality, but a reminder that we are created already in motion, and there is no argument so persuasive it would move a rock.

And, more basically, it is a reminder that the map is not the territory, because humans confuse the two by default. The picture in your head isn’t what’s there to be seen. Putting pieces of your judgement inside an imaginary impartial judge doesn’t automatically make it true. Perhaps it does really make it more trustworthy—you “promote” your better heuristics by wrapping them up inside the judge, giving them authority over the rest. But, this system has its problems. It can create perverse incentives on the other parts of your mind, to please the judge in ways that let them get away with what they want. It can make you blind to other ways of being. It can make you think you’ve avoided map-territory confusion once and for all—“See? It’s written right there on my soul: DO NOT CONFUSE MAP AND TERRITORY. It is simply something I don’t do.”—while really passing the responsibility to a special part of your map which is now almost always confused for the territory.

So, laugh at the judge a little. Look out for your real reasons for thinking and doing things. Notice whether your arguments seem tailored to convince your judge rather than the person in front of you. See where it leads you.

• In this case, I think it’s worth being very VERY curious as to how that judge got in there in the first place. It’s also probably worth eventually doing psychological research in order to classify types of judge, in case they aren’t all the same. Do mathematicians above a certain caliber all possess internal judges with a common standard for proof? How does this phenomenon relate to actual judges?

In general, I would expect a person following this advice to, in the average case, diverge from the process of creating a map in correspondence with the territory, towards the replacement of the map with a feedback system conditioning model-free harmony. I would expect that their mind would gradually transition from asking ‘is this true’ to asking ‘is this what power wants me to say’, and eventually to come to see truth as a dreadful constraint on safety rather than as a support with which to achieve safety. I would expect them to grow in their ability to lead and to sell, but to lose the ability to manage, or otherwise constrain the actions of a group in order to direct them towards some goal other than politics.

That doesn’t at all mean that the ideal mode of cognition involves such a judge. Just that collaborative cognition requires a common set of protocols, and this seems to be the default such set of protocols for constructive collaboration, while other protocols seem favored by predatory collaboration and seem likely to emerge if not suppressed.

• You make an interesting point.

For many people (but not for me), it seems the judge explicitly speaks in the voice of one of their parents.

Certainly I think the judge is serving a group-coordination role. It manages outward-facing justifiability. Hence, I associate the judge with crony beliefs. I interpret you as saying that if the judge didn’t handle those, they could start getting everywhere—and also that the judge may be associated with other benefits, as in the case of mathematical reasoning.

I have actually done away with the judge at times, one time lasting a whole week. I would use the same language as before for social coordination purposes, but it wouldn’t carry the same meaning—for example, “I feel bad about X” would mean “I wish X could have happened without giving anything else up”, but carry no feeling of conflict in my mind; normally, it would mean “I am feeling conflicted about my policy around X”.

So, from that perspective I expect that getting rid of the judge tends to make one more epistemically coherent and less prone to bend thoughts toward social consensus. The social-coordination role of the judge then has to be replaced with other strategies.

On the other hand, your hypothesis doesn’t seem absurd to me.

• A few thoughts.

It seems that the judge often has a big part to play in protecting the epistemology.

I’m guessing the strength of your judge and the role it plays depend on your openness, in the Big Five sense.

For me there is a two-step process. Even if the arguments for something aren’t strong, I can “entertain” an idea if that idea is related to something important. That idea might hang around for a long time, accruing evidence for and against it in my experience and as I think about it more. Only when it passes the judge do I confidently go around stating it. You can see the start of this type of entertaining in this post.