Motivated Reasoning as a Bias
There’s a classic paper by Ziva Kunda, “The Case for Motivated Reasoning” (1990), which I highly recommend. It’s one of the most influential works on how our desires can shape the way we think.
In this paper, Kunda proposes that reasoning can be driven by motivation, and she divides motivated reasoning into two major categories: reasoning in which the motive is to arrive at an accurate conclusion, whatever it may be, and reasoning in which the motive is to arrive at a particular, directional conclusion.
It seemed to me that this second category of reasoning could be classified as a bias. First, a motive to reach a particular, directional conclusion leads to a deviation from the truth. Second, it does so systematically. For example, if a guy is driven by a motive to conclude that a girl likes him, this motive will arise every time he wonders whether she likes him, and it will push his conclusion in the same direction each time. In this sense it works like a poorly calibrated scale, the Ebbinghaus illusion, or confirmation bias.
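To make “systematically” concrete, here’s a toy simulation, purely my own sketch and not anything from Kunda’s paper. Two reasoners see the same noisy evidence about the question; the accuracy-motivated one weighs every item equally, while the directionally motivated one counts unfavorable items at half weight (the half-weight discount is an arbitrary assumption chosen for illustration).

```python
import random

random.seed(0)

def verdict(evidence, motivated):
    """Sum +1/-1 evidence items into a yes/no verdict ("she likes me").

    An accuracy-motivated reasoner weighs every item equally; a
    directionally motivated one counts unfavorable items at half
    weight (the 0.5 discount is an arbitrary illustrative choice).
    """
    score = sum(e * 0.5 if motivated and e < 0 else e for e in evidence)
    return score > 0

TRIALS = 10_000
# 11 evidence items per question (odd, so no ties); the evidence is
# pure noise, so an unbiased reasoner should say "yes" about half the time
samples = [[random.choice([1, -1]) for _ in range(11)] for _ in range(TRIALS)]

unbiased = sum(verdict(s, motivated=False) for s in samples) / TRIALS
biased = sum(verdict(s, motivated=True) for s in samples) / TRIALS

print(f"unbiased 'yes' rate: {unbiased:.2f}")  # ~0.50
print(f"biased   'yes' rate: {biased:.2f}")    # ~0.89
```

Like a scale that always reads two kilograms heavy, the deviation isn’t random noise: it points the same way on every weighing.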
Let’s explore it a bit.
1. This bias is not the same as confirmation bias. The classic demonstration of confirmation bias is Peter Wason’s 2-4-6 experiment. It’s hard to believe that the participants’ answers in that study were driven by a motive to reach a particular, directional conclusion. Rather, they chose confirming triples because our brains tend to test a hypothesis by trying to confirm it rather than challenge it. So we can say that confirmation bias belongs in the first category: reasoning in which the motive is to arrive at an accurate conclusion.
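For readers unfamiliar with the task, here is a minimal sketch of its logic, my own illustration of the standard description rather than Wason’s actual procedure. The experimenter’s hidden rule is simply “any ascending triple”, and a participant who hypothesizes “each number goes up by 2” and tests only triples that fit that hypothesis gets “yes” every time, never discovering the real rule.

```python
def hidden_rule(triple):
    """The experimenter's actual rule: any strictly ascending triple."""
    a, b, c = triple
    return a < b < c

# Positive tests: triples chosen to FIT the "goes up by 2" hypothesis.
# Every one passes, so the hypothesis feels confirmed again and again.
for triple in [(2, 4, 6), (8, 10, 12), (100, 102, 104)]:
    print(triple, "->", hidden_rule(triple))  # True, True, True

# A disconfirming test: it violates "goes up by 2" yet still passes,
# which falsifies the hypothesis. Participants rarely try tests like this.
print((1, 2, 50), "->", hidden_rule((1, 2, 50)))  # True
```

The failure here requires no desired conclusion about the world, only a testing strategy that seeks confirmation, which is exactly why it belongs in the first category.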
2. We are not aware of the moment when a motive to reach an accurate conclusion shifts into a motive to reach a particular, directional conclusion. We may realize later that our motive changed, but at the moment of the shift we neither notice it nor control it; it is not a conscious choice. We go on believing that our motive is to reach an accurate conclusion. If the switch happens consciously, it has nothing to do with this bias.
It’s like intending to put the bread in the drawer but actually putting it in the fridge. All day you go on thinking the bread is in the drawer, and you realize what happened only when you open the fridge and see it there. Unfortunately, we rarely get the chance to “open the fridge”.
3. It’s impossible to imagine what it feels like to be biased about a particular question. Suppose you read about a man driven by a motive to reach a particular, directional conclusion: that he doesn’t need to see a doctor. You can’t put yourself in his place and feel how such a motive would drive you.
I think it’s similar to my telling you I’m afraid of the dark when you don’t share that fear. You try to picture what it’s like, but you can’t, and you might even wonder how anyone could be afraid of something so silly. In the same way, you might wonder how the man could sincerely believe the arguments he uses to justify not seeing a doctor.
This likely applies to all cognitive biases and partly explains why it can be so hard to convince someone that cognitive biases exist. It would be much easier if we could vividly imagine what it’s like to be biased about a particular question, the way we can imagine cold, pain, or joy.
4. The more a question matters to us, the more likely we are to have this bias.
It’s hard to find someone who would be biased about whether the office coffee is better this week than last week, but much easier to find someone biased about whether they are performing well at work. And a question can start to matter quickly: for example, the moment you begin arguing with a colleague about whether this week’s coffee is better than last week’s.
5. Being driven by a motive to reach a particular, directional conclusion can negatively affect our other, related beliefs. For example, someone who wants to keep smoking, when confronted with counterarguments based on scientific evidence, may begin to devalue science itself. Cognitive dissonance theory illustrates this very well.
Final Thought
Science has blinding, randomization, and control groups to help reduce this bias, but in everyday life we can’t apply these tools. So it may turn out that many of our wrong decisions, especially the important ones we make in pursuit of a strategic goal, are caused not by a lack of information or skill but by this bias.
Comments
Yes. I think this is real and underappreciated in its influence even on rationalist thinking. We’ve all got motivations. See my quick piece on the topic, “Motivated reasoning/confirmation bias as the most important cognitive bias”.
You said Kunda’s original piece divided reasoning into motivated and unmotivated kinds. I’m sure you’re right, but I didn’t remember that. As usual, I’d say putting this into categories isn’t as correct or useful as thinking about it on a spectrum: how much do we want to reach the truth, and how much are we motivated toward one outcome? It’s tough to think of a topic on which we’ll remain neutral about our desired conclusion for more than the briefest, most casual discussion.
I agree that putting this into categories isn’t as correct or useful as thinking about it on a spectrum. At the same time, we should keep in mind that there is no reason to believe that the two categories of reasoning (the endpoints of the spectrum) involve the same kind of mechanism.