There is definitely a standard story which says roughly “motivated reasoning in humans exists because it is/was adaptive for negotiating with other humans”. I do not think that story stands up well under examination; when I think of standard day-to-day examples of motivated reasoning, that pattern sounds like a plausible explanation for some-but-a-lot-less-than-all of them.
For example: suppose it’s 10 pm and I’ve been playing Civ all evening. I know that I should get ready for bed now-ish. But… y’know, this turn isn’t a very natural stopping point. And it’s not that bad if I go to bed half an hour late, right? Etc. Obvious motivated reasoning. But man, that motivated reasoning sure does not seem very socially-oriented? Like, sure, you could make up a story about how I’m justifying myself to an imaginary audience or something, but it does not feel like one would have predicted the Civ example in advance from the model “motivated reasoning in humans exists because it is/was adaptive for negotiating with other humans”.
Another class of examples: very often in social situations, the move which will actually get one the most points is to admit fault and apologize. And yet, instead of that, people instinctively spin a story about how they didn’t really do anything wrong. People instinctively spin that story even when it’s pretty damn obvious (if one actually stops to consider it) that apologizing would result in a better outcome for the person in question. Again, you could maybe make up some story about evolving suboptimal heuristics, but this just isn’t the behavior one would predict in advance from the model “motivated reasoning in humans exists because it is/was adaptive for negotiating with other humans”.
A pattern with these examples (and many others): motivated reasoning isn’t mainly about fooling others, it’s about fooling oneself. Or at least a part of oneself. Indeed, there’s plenty of standard wisdom along those lines: “the easiest person to fool is yourself”, etc.
Here’s a model which I think much better matches real-world motivated reasoning. (Note, however, that all the above critique still stands regardless of whether this next model is correct.)
Motivated reasoning simply isn’t adaptive. Even in the ancestral environment, motivated reasoning decreased fitness. It appeared in the first place as an accidental side-effect of an overall-beneficial change in human minds relative to earlier minds, and that change was recent enough that evolution hasn’t had time to fix the anti-adaptive side effects.
There’s more than one hypothesis for what that change could be. Most likely it involves certain functions within the mind being split into separate parts, so that one part of the mind can now sometimes “cheat” by trying to trick another part, while the separation of those functions remains overall beneficial.
An example falsifiable prediction of this model: other animals generally do not engage in motivated reasoning. If the relevant machinery had been around for very long, we would expect evolution to have already fixed the problem.