In Defense of Epistemic Empathy


TLDR: Why think your ideological opponents are unreasonable? Common reasons: their views are (1) absurd, or (2) refutable, or (3) baseless, or (4) conformist, or (5) irrational. None are convincing.


Elizabeth is skeptical about the results of the 2020 election. Theo thinks Republicans are planning to institute a theocracy. Alan is convinced that AI will soon take over the world.

You probably think some (or all) of them are unhinged.

As I’ve argued before, we seem to be losing our epistemic empathy: our ability to both (1) be convinced that someone’s opinions are wrong, and yet (2) acknowledge that they might hold those opinions for reasonable reasons. For example, since the 90s our descriptions of others as ‘crazy’, ‘stupid’ or ‘fools’ have skyrocketed.

I think this is a mistake. Lots of my work aims to help us recover our epistemic empathy—to argue that reasonable processes can drive such disagreements, and that we have little evidence that irrationality (the philosophers’ term for being “crazy”, “stupid”, or a “fool”) explains it.

The most common reaction: “Clever argument. But surely you don’t believe it!”

I do.

Obviously people sometimes act and think irrationally. Obviously that sometimes helps explain how they end up with mistaken opinions. The question is whether we have good reason to think that this is generically the explanation for why people have such different opinions than we do.

Today, I want to take a critical look at some of the arguments people give for suspending their epistemic empathy: (1) that their views are absurd; (2) that the questions have easy answers; (3) that they don’t have good reasons for their beliefs; (4) that they’re just conforming to their group; and (5) that they’re irrational.

None are convincing.

Absurdity.

“Sure, reasonable people can disagree on some topics. But the opinions of Elizabeth, Theo, and Alan are so absurd that only irrationality could explain it.”

This argument overstates the power of rationality.

Spend a few years in academia, and you’ll see why. Especially in philosophy, it’ll become extremely salient that reasonable people often wind up with absurd views.

David Lewis thought that there were talking donkeys. (Since the best metaphysical system is one in which every possible world we can imagine is the way some spatio-temporally isolated world actually is.)

Timothy Williamson thinks that it’s impossible for me to not have existed—even if I’d never been born, I would’ve been something or other. (Since the best logical system is one on which necessarily everything necessarily exists.)

Peter Singer thinks that the fact that you failed to give $4,000 to the Against Malaria Foundation this morning is the moral equivalent of ignoring a drowning toddler as you walked into work. (Since there turns out to be no morally significant difference between the cases.)

And plenty of reasonable people (including sophisticated philosophers) think both of the following:

  1. It’s monstrous to run over a bunny instead of slamming on your brakes, even if doing so would hold up traffic significantly; yet

  2. It’s totally fine to eat the carcass of an animal that was tortured for its entire life (in a factory farm), instead of eating a slightly-less-exciting meal of beans and rice.

David Lewis, Tim Williamson, Peter Singer, and many who believe both (1) and (2) are brilliant, careful thinkers. Rationality is no guard against absurdity.

Ease.

“Unlike philosophical disputes, political issues just aren’t that difficult.”

This argument belies common sense.

There are plenty of easy questions that we are not polarized over. Is brushing your teeth a good idea? Are Snickers bars healthy? What color is grass? Etc.

Meanwhile, the sorts of issues that people polarize over almost always involve complex, chaotic systems that only large networks of people (with strong norms of trust, honesty, and reliability) can hope to figure out. Hot-button political issues—vaccine safety, election integrity, climate change, police violence, gender dynamics, etc.—are all topics which require amalgamating massive amounts of disparate evidence from innumerable sources.

If you doubt this, switch from qualitative to quantitative questions. Instead of “Is the climate changing?” ask “How many degrees will atmospheric temperatures increase by 2100?” Instead of “Is police violence a problem?” ask “How much does police violence (vs. economic inequality, generational poverty, institutional racism, etc.) harm racial minorities?” Even with complete institutional trust, we don’t know!

Back to the qualitative questions. They may be “easy” to answer for anyone who shares your life experiences, social circles, and patterns of trust. But most people who disagree with you don’t share them.

Nor is it easy to figure out whom to trust. It may seem obvious to you that the trustworthy sources are X, Y, and Z. But networks of trust are built up over an entire lifetime, amalgamating countless experiences of who-said-what-when—you can’t figure it out from the armchair or a quick Google search.

To see this, imagine what would happen if tomorrow all your friends and co-workers started voicing extreme skepticism about your favorite news networks and scientific organizations. I doubt it’d take long before you became unsure whether to trust them.

But that is the position of your political opponents! Most of their friends and co-workers do think that your favored (X, Y, and Z) sources are unreliable. And, more generally, they’ve had a different lifetime of amalgamating different experiences leading to different networks of trust. No wonder they think differently—you would too, in their shoes.

Baselessness.

“I’ve talked to people who believe these things, and they don’t have any good reasons.”

This argument underestimates the communicative divide between people with radically different viewpoints.

I recently had the following experience. One day I gave a lecture to a room full of philosophers who said (I hope, honestly!) that I gave an articulate defense of the rationality of a form of confirmation bias. The next day, I was having coffee with a friend who’s a biologist, and I tried to explain the idea.

It didn’t go well. I gave examples that were confusing. I referred to concepts they didn’t know. I started to explain them, only to realize the concept was unnecessary. I backtracked and vacillated. I was an inarticulate mess.

Most academics have had similar experiences. Why?

Conversations always take place within a common ground of mutual presuppositions. This is extremely useful, because it allows us to move much quicker over familiar territory to get to new ideas—at least when our audience shares our presuppositions.

But it causes a problem when we get used to talking in that way. When we talk to someone who doesn’t share those presuppositions, we find ourselves having to unlearn a bunch of conversational habits on the fly. This is hard, so often the conversation runs aground. (This is why academics are so often so bad at explaining their research to non-academics.)

Talking across the political aisle raises exactly the same problem: it pulls the conversational rug of shared presuppositions out from under us.

Consider the following thought experiment. Suppose you think that climate change is human-caused. Now you find yourself in an Uber, talking to someone who—though open and curious—is unsure about that. It’s clear that he watches completely different media than you, has no direct experience with scientists or the institution of science, has a completely different social network, and is generically skeptical about powerful institutions. You have 5 minutes to explain why you believe in climate change. How well do you think you’ll do?

Not well! (If you doubt this, try it—you’re likely suffering from an illusion of explanatory depth.)

The result? Your interlocutor would probably come away thinking that you don’t have any good reasons for your views. But the fact that you seemed this way to him in a brief conversation obviously doesn’t show that you don’t have good reasons for your beliefs—rather, it shows how hard it is to convey those good reasons across such a large communicative divide.

Conformity.

“People are too conformist, and to the wrong sources.”

This argument uses a double-standard.

Either (i) people should listen to their social circles and trusted authorities, or (ii) they shouldn’t.

If (i) they should listen to their social circles and trusted authorities, the result will be that those with very different patterns of social trust than you should believe very different things. This is what happens to people like Elizabeth. When their friends, co-workers, and regular media outlets spend months talking about suspicious facts about the 2020 election, of course it’s reasonable for them to become skeptical. (Wouldn’t you do the same, if your friends and regular media outlets started saying such things about—say—the 2016 election?)

On the other hand, if (ii) people shouldn’t listen to their social circles and trusted authorities, then the consequence will be excess skepticism. This is what happens to those who are skeptical of vaccines, climate change, or science generally. In fact, people who believe in conspiracy theories have usually “done their own research”—people who are skeptical of the covid-19 vaccines know much more about the fine details of their development than I do.

Psychology.

“Psychology and behavioral economics have shown, beyond any doubt, that people are systematically irrational.”

They really haven’t.

That’s what my posts are about. They’re my attempt to step outside my presuppositional bubble and explain why there’s reason to be skeptical of narratives of irrationality.

In short: empirical work on irrationality always presupposes normative claims about what rational people would think or do in the situation. Often those claims are oversimplified—or just wrong.

I’ve argued that this is true for research on overconfidence, belief persistence, confirmation bias, the conjunction fallacy, the gambler’s fallacy, and polarization.

I’m not alone. Plenty of people—most notably, the cognitive scientists and AI researchers who try to replicate the everyday feats of human cognition—agree. We’ve learned a lot about how the mind works from studying the ways inferences can go wrong. But we have not learned that the explanation is that people are dumb, stupid, or foolish.


That’s my attempt to say why I’m unconvinced by the common reasons for suspending epistemic empathy. Maybe we really should think that those who disagree with us have good reasons to do so.

But I recognize the meta-point: lots of people disagree with me about this! They’re convinced that irrationality is what drives many societal disagreements. And I’d be a hypocrite if I didn’t think those beliefs were reasonable, too. I do. (Which is not to say I think those beliefs are true—thinking a belief is reasonable is compatible with thinking it’s wrong.)

So please: share your reasons! I want to know why you think your political opponents are irrational, so I can better work out why (and to what degree) I disagree.