Two Kinds of Irrationality and How to Avoid One of Them

It seems to me that there are two kinds of human irrationality. One could be called “bug” irrationality: not bugs as in insects, but bugs in the design of our minds, ways in which our minds could have been better designed. This category includes things like hyperbolic discounting (also called myopic discounting), as well as general failures to correctly apply the laws of logic and probability. It’s often worth making an effort to correct for this kind of irrationality, but I think some of the discussion of it is overly pessimistic. From an evolutionary point of view, the main reason this kind of irrationality exists is probably just that flawed rules of thumb which usually work out okay can be more efficient than more rigorous methods.

As Yvain once wrote, “most people are rational enough for their own purposes.” Because of that, I don’t think this kind of irrationality is our biggest worry, and it’s not what this post is about. But if you want to do more reading on the view of this kind of irrationality that I’ve just sketched, I recommend various papers by the psychologist Richard Samuels and the philosopher Stephen Stich, such as Ending the Rationality Wars, Rationality & Psychology, and Reason and Rationality.

The worst examples of human irrationality, in my view, are what could be called “feature” irrationality: irrationality that is a feature of our minds, something evolution designed into them. Why would evolution do this? Here is Steven Pinker’s explanation from How the Mind Works:

[Psychologist Robert] Trivers, pursuing his theory of the emotions to its logical conclusion, notes that in a world of walking lie detectors the best strategy is to believe your own lies...

Everyone has heard of “reducing cognitive dissonance,” in which people invent a new opinion to resolve a contradiction in their minds. For example, a person will recall enjoying a boring task if he had agreed to recommend it to others for paltry pay… As originally conceived of by the psychologist Leon Festinger, cognitive dissonance is an unsettled feeling that arises from an inconsistency in one’s beliefs. But that’s not right: there is no contradiction between the proposition “The task is boring” and the proposition “I was pressured into lying that the task was fun.” Another social psychologist, Elliot Aronson, nailed it down: people doctor their beliefs only to eliminate contradiction with the proposition “I am nice and in control.” Cognitive dissonance is always triggered by blatant evidence that you are not as beneficent and effective as you would like people to think. The urge to reduce it is the urge to get your self-serving story straight (pp. 421-423).

In other words, we have evolved to have an irrationally inflated view of ourselves, so as to better sell others on that view. I can think of one other important advantage of irrationality as a feature: coalition building. A coalition may be strengthened by a belief in its own righteousness and the wickedness of its enemies, and members can signal loyalty by adopting the ideological shibboleths that the group shares.

The tragedy of feature-irrationality is that (unsurprisingly, on Darwinian grounds) its costs tend to be borne by other people. Throughout history, many people have believed that martyrs receive rewards in paradise far exceeding any earthly rewards, but only a very few of those people actually become martyrs. Thus, the harm done in practice by that belief about martyrs is relatively small. Much greater harm has been done by the belief that unbelief leads to eternal damnation, and therefore unbelievers should be tortured and killed, both to punish and discourage unbelief.

But my purpose here isn’t to rail against the evils of this kind of irrationality. Rather, my purpose is to suggest a relatively simple method for avoiding the worst of human irrationality. The core idea is one articulated by eugman:

Watch out for when you are sacrificing epistemology for instrumental gains. If there is ever a time where you want to have certain beliefs because it is more convenient and you are trying to talk yourself into them, that is a giant red flag.

Since feature-irrationality is all about sacrificing truth for the sake of instrumental gains, being aware of when you’re doing that is the very essence of combating it. And since the instrumental gains are usually in terms of other people’s view of us, we can be more specific: “When you’re trying to figure out what to believe, you can’t care what other people will think of you.”

There are a couple of misunderstandings that need to be avoided here. First, the rule is not to care what other people will think of you; it is not a rule against caring what other people think in general. In fact, when you’re trying to figure out what to think about X, it’s generally important to take into account what other well-informed people think about X. If you know that 99% of the experts think that p, you’d be wise to be very, very cautious about concluding not-p. But it’s important to do this for the right reason. The right reason is that other people might know or understand something you don’t. It would still be a mistake to be swayed by fears that high-status people will think less of you if you disagree with them.

Furthermore, you have to really not care what other people will think of you. It does nothing—and can even be counterproductive—to merely make a show of indifference. When someone makes a show of indifference to others’ opinions of them, it’s often a sign they care intensely about what other people think. Pinker observes, “The physicist Richard Feynman wrote two books describing how brilliant, irreverent, and admired he was and called one of them What Do You Care What Other People Think?” (Pinker, p. 361) Going too far in not caring what other people think can be a kind of countersignaling or even, in extreme cases, what art historian Quentin Bell called “conspicuous outrage.”

So in order to follow the rule “When you’re trying to figure out what to believe, you can’t care what other people will think of you,” you can’t worry that you’ll lose status for advocating an unpopular idea, but you also can’t get too excited about being an intellectual rebel or contrarian.

Had I followed this principle, I might have managed to avoid at least a couple of the more embarrassing mistakes I’ve made in my life. Here’s one of them: I started off in college planning on going to medical school, which was not a good idea. Once I got sufficiently tired of my biology classes, I looked around to see what I could do with my life, and noticed I was doing really well in my philosophy classes. So an obvious choice (or so it seemed to me) was going to graduate school in philosophy.

However, there was a problem: my initial contact with philosophy had left me with a somewhat dim view of the field, or at least a dim view of academic philosophy. So I resisted the idea of going to graduate school in philosophy for a long time. But eventually, seeing no other options (according to my peculiar, upper-middle-class notions about what constituted an “option”), I gave in. Once I’d resigned myself to going to graduate school in philosophy, I began to worry about how I would justify my choice to others, and began thinking nicer thoughts about philosophy.

When I ultimately left my Ph.D. program after three semesters, it took a while to adjust to realizing how foolish I’d been. I could have saved myself a lot of time if I’d stopped and noticed, “No, you don’t actually think this is a great idea; you’re just trying to imagine how you’ll justify this decision to other people.”