Motivated reasoning is a misfire of a generally helpful heuristic: try to understand why what other people are telling you makes sense.
In a high trust setting, people are usually well-served by assuming that there’s a good reason for what they’re told, what they believe, and what they’re doing. Saying, “figure out an explanation for why your current plans make sense” is motivated reasoning, but it’s also a way to just remember what the heck you’re doing and to coordinate effectively with others by anticipating how they’ll behave.
The thing to explain, I think, is why we apply this heuristic in less-than-full-trust settings. My explanation is that this sense-making is still adaptive even in pretty low-trust settings. The best results you can get in a low-trust (or parasitic) setting are worse than you’d get in a higher-trust setting, but sense-making still typically leads to better outcomes than not doing it.
In particular, while it’s easy in retrospect to pick a specific action (playing Civ all night) and say “I shouldn’t have sense-made that,” it’s hard to figure out in a forward-looking way which settings or activities do or don’t deserve sense-making. We just do it across the board, unless life has made us into experts on how to calibrate our sense-making. This might look like having enough experience with a liar to disregard everything they’re saying, and perhaps even to sense-make “ah, they’re lying to me like THIS for THAT reason.”
In summary, motivated reasoning is just sense-making, which is almost always net adaptive. Specific products, people, and organizations take advantage of this to exploit people’s sense-making in limited ways. If we focus on the individual misfires in retrospect, it looks maladaptive. But if you had to predict in advance whether or not to sense-make any given thing, you’d be hard-pressed to do better than you’re already doing, which probably involves sense-making quite a bit of stuff most of the time.