The Problematic Third Person Perspective

[Epistemic status: I now endorse this again. Michael pointed out a possibility for downside risk with losing mathematical ability, which initially made me update away from the view here. However, some experience noticing what it is like to make certain kinds of mathematical progress made me return to the view presented here. Maybe don’t take this post as inspiration to engage in extreme rejection of objectivity.]

There are a number of conversational norms based on the idea of an imaginary impartial observer who needs to be convinced. It’s the adversarial courtroom model of conversation. Better norms, such as double crux, can be established by recognizing that a conversation is taking place between two people.

Burden-of-proof is one of these problematic ideas. The idea that there is some kind of standard which would put the burden on one person or another would only make sense if there were a judge to convince. If anything, it would be better to say the burden of proof is on both people in any argument, in the sense that they are responsible for conveying their own views to the other person. If burden-of-proof is about establishing that they “should” give in to your position, it accomplishes nothing; you need to convince them of that, not yourself. If burden-of-proof is about establishing that you don’t have to believe them until they say more… well, that was true anyway, but perhaps speaks to a lack of curiosity on your part.

More generally, this external-judge intuition promotes the bad model that there are objective standards of logic which must be adhered to in a debate. There are epistemic standards which it is good to adhere to, including logic and notions of probabilistic evidence. But if the other person has different standards, then you have to either work with them or discuss the differences. There’s a failure mode of the overly rationalistic where you just get angry that their arguments are illogical and that they won’t accept your perfectly-formatted ones, so you try to get them to bow down to your standards by force of will. (The same failure mode applies to treating definitions as objective standards which must be adhered to.) What good does it do to continue arguing with them via standards you already know differ from theirs? Try to understand and engage with their real reasons rather than replacing them with imaginary things.

Actually, it’s even worse than this, because you don’t know your own standards of evidence completely. So, the imaginary impartial judge is also interfering with your ability to get in touch with your real reasons, what you really think, and what might sway you one way or the other. If your mental motion is to reach for justifications which the impartial judge would accept, you are rationalizing rather than finding your true rejection. You have to realize that you’re using standards of evidence that you yourself don’t fully understand, and live in that world—otherwise you rob yourself of the ability to improve your tools.

This happens in two ways that I can think of.

  • Maybe your explicit standards are good, but not perfect. You notice beliefs that are not up to your standards, and you drop them reflexively. This might be a good idea most of the time, but there are two things wrong with the policy. First, you might have dropped a good belief. You could have done better by checking which you trusted more in this instance: the beliefs, or your standards of belief. Second, you’ve missed an opportunity to improve your explicit standards. You could have explored your reasons for believing what you did, and compared them to your explicit standards for belief.

  • Maybe you don’t notice the difference between your explicit standards and the way you actually arrive at your beliefs. You assume implicitly that if you believe something strongly, it’s because there are strong reasons of the sort you endorse. This is especially likely if the beliefs pattern-match to the sort of thing your standards endorse; for example, being very sciency. As a result, you miss an opportunity to notice that you’re rationalizing something. You would have done better to first look for the reasons you really believed the thing, and then check whether they meet your explicit standards and whether the belief still seems worth endorsing.

So far, I’ve argued that the imaginary judge creates problems in two domains: navigating disagreements with other people, and navigating your own epistemic standards. I’ll note a third domain where the judge seems problematic: judging your own actions and decisions. Many people use an imaginary judge to guide their actions. This leads to pitfalls such as moral self-licensing, in which doing good things gives you a license to do more bad things (setting up a budget makes you feel good enough about your finances that you can go on a spending spree, eating a salad for lunch makes you more likely to treat yourself to ice cream after work, etc.). Getting rid of the internal judge is an instance of Nate’s Replacing Guilt, and carries similar risks: if you’re currently using the internal judge for a bunch of important things, you have to either make sure you replace it with other working strategies, or be OK with kicking those things to the roadside (at least temporarily).

Similarly with the other two categories I mentioned. Noticing the dysfunctions of the imaginary-judge perspective should not make you immediately remove it; invoke Chesterton’s Fence. However, I would encourage you to experiment with removing the imaginary third person from your conversations, and seeing what you do when you remind yourself that there’s no one looking over your shoulder in your private mental life. I think this relates to a larger ontological shift which Val was also pointing toward in In Praise of Fake Frameworks. There is no third-person perspective. There is no view from nowhere. This isn’t a rejection of reductionism, but a reminder that we haven’t finished yet. This isn’t a rejection of the principles of rationality, but a reminder that we are created already in motion, and there is no argument so persuasive it would move a rock.

And, more basically, it is a reminder that the map is not the territory, because humans confuse the two by default. The picture in your head isn’t what’s there to be seen. Putting pieces of your judgement inside an imaginary impartial judge doesn’t automatically make them true. Perhaps it does really make them more trustworthy—you “promote” your better heuristics by wrapping them up inside the judge, giving them authority over the rest. But this system has its problems. It can create perverse incentives on the other parts of your mind, to please the judge in ways that let them get away with what they want. It can make you blind to other ways of being. It can make you think you’ve avoided map-territory confusion once and for all—“See? It’s written right there on my soul: DO NOT CONFUSE MAP AND TERRITORY. It is simply something I don’t do.”—while really passing the responsibility to a special part of your map which is now almost always confused for the territory.

So, laugh at the judge a little. Look out for your real reasons for thinking and doing things. Notice whether your arguments seem tailored to convince your judge rather than the person in front of you. See where it leads you.