Red Flags for Rationalization

Previously: Why a New Rationalization Sequence?

What are Red Flags?

A red flag is a warning sign that you may have rationalized: something that is practical to observe and more likely in the rationalization case than in the non-rationalization case.

Some are things that are likely to cause rationalization. Others are likely to be caused by it. One on this list is even based on a common cause. (I don’t have any based on selection on a common effect, but in theory there could be.)

How to Use Red Flags

Seeing a red flag doesn’t necessarily mean that you have rationalized, but it’s evidence. Likewise, just because you’ve rationalized doesn’t mean your conclusion is wrong, only that it’s not as well supported as you thought.
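One way to make “it’s evidence” concrete is the odds form of Bayes’ theorem. The numbers below are purely illustrative assumptions of mine (a 3:1 likelihood ratio and 1:4 prior odds), not estimates from this post:

```latex
% R = "I rationalized", F = "this red flag fired"; numbers are illustrative only.
\frac{P(R \mid F)}{P(\lnot R \mid F)}
  = \frac{P(F \mid R)}{P(F \mid \lnot R)} \cdot \frac{P(R)}{P(\lnot R)}
  \qquad \text{e.g. } 3 \cdot \tfrac{1}{4} = \tfrac{3}{4},
  \text{ so } P(R \mid F) = \tfrac{3}{7} \approx 0.43 .
```

A flag that is three times as likely when you are rationalizing moves you from a 20% prior to roughly 43%: well worth a pause, nowhere near a verdict.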

So when one of these flags is raised, don’t give up on ever discovering truth; don’t stop-halt-catch-fire; definitely don’t invert your conclusion.

Just slow down. Take the hypothesis that you’re rationalizing seriously and look for ways to test it. The rest of this sequence will offer tools for that purpose, but just paying attention is half the battle.

A lot of these things can be present to a greater or lesser degree, so you’ll want to set thresholds. I’d guess an optimal setting has about 1/3 of triggers turn out to be true: high enough that you keep doing your checks seriously, but low because the payoff matrix is quite asymmetrical.

Basically, use these as trigger-action planning. Trigger: anything on this list. Action: spend five seconds doing your agenty best to worry about rationalization.
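As a toy illustration of that asymmetry, here is a minimal sketch in Python with entirely made-up numbers: the function name, the 100:1 ratio between the cost of a missed rationalization and the cost of a five-second check, and the 50% catch rate are all assumptions for illustration, not measurements.

```python
# Toy expected-cost comparison: why the trigger threshold should be set low.
# All numbers are made-up placeholders, not empirical estimates.

def expected_cost(p_rationalizing: float, check: bool,
                  cost_of_check: float = 1.0,                 # ~five seconds of attention
                  cost_of_missed_rationalization: float = 100.0,
                  catch_rate: float = 0.5) -> float:
    """Expected cost of checking (or not) when a red flag fires."""
    if not check:
        return p_rationalizing * cost_of_missed_rationalization
    # Checking always costs a little, but catches some fraction of real cases.
    return cost_of_check + p_rationalizing * (1 - catch_rate) * cost_of_missed_rationalization

if __name__ == "__main__":
    for p in (0.05, 0.2, 0.5):
        print(f"P(rationalizing)={p:.2f}  "
              f"ignore={expected_cost(p, check=False):6.1f}  "
              f"check={expected_cost(p, check=True):6.1f}")
```

Even at a 5% chance of rationalization, the cheap check wins (expected cost 3.5 vs. 5.0 in these made-up units), which is why it pays to err on the side of triggering.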

Conflict of Interest

This is a classic reason to distrust someone else’s reasoning. If they have something to gain from your believing a conclusion, apart from that conclusion being true, you have reason to be suspicious. But what does it mean for you to gain from believing something, apart from it being true?

Not Such Great Liars

Probably the simplest reason is that you need to deceive someone else. If you’re not a practiced liar, the easiest way to do this is to deceive yourself.

Simple example: you’re running late and need to give an estimate of when you’ll arrive. If you say “ten minutes late” and arrive twenty minutes late, it looks like you hit another ten minutes’ worth of bad luck, whereas saying “twenty minutes” makes it look like your fault. You’re not good at straight-up lying, but if you can convince yourself you’ll only be ten minutes late, all is well.

Unendorsed Values

Values aren’t simple, and you aren’t always in agreement with yourself. Let’s illustrate this with examples:

Perhaps you believe that health and long life are more important than fleeting pleasures like ice cream, but there’s a part of you that has a short time preference and knows ice cream is delicious. That part would love to convince the rest of you of a theory of nutrition that holds that ice cream is healthy.

Perhaps you believe that you should follow scientific results wherever the evidence leads you, but it seems to be leading someplace that a professor at Duke predicted a few months ago, and there’s a part of you that hates Duke. If that part can convince the rest of you that the data is wrong, you won’t have to admit that somebody at Duke was right.

Wishful Thinking

A classic cause of rationalization. Expecting good things feels better than expecting bad things, so you’ll want to believe it will all come out all right.

Catastrophizing Thinking

The opposite of wishful thinking. I’m not sure what the psychological root is, but it seems common in our community.

Conflict of Ego

The conclusion is: therefore I am a good person. The virtues I am strong at are the most important, and those I am weak at are the least. The work I do is vital to upholding civilization. The actions I took were justified. See Foster & Misra (2013) on Cognitive Dissonance and Affect.

Variant: therefore we are good people, where “we” can be any group membership the thinker feels strongly about. Note that the individual need not have been involved in the virtue, work, or action to feel pressure to rationalize it.

This is particularly insidious when “we” is defined partly by a large set of beliefs, such as the Social Justice Community or the Libertarian Party. Then it is tempting to rationalize that every position “we” have ever taken was correct.

In my experience, the communal variant is more common than the individual one, but that may be an artifact of my social circles.

Reluctance to Test

If you have an opportunity to gain more evidence on the question and feel reluctant to take it, this is a bad sign. This one is illustrated by Harry and Draco discussing Hermione in HPMOR.

Suspicious Timing

Did you stop looking for alternatives as soon as you found this one?

Similarly, did you spend a lot longer looking for evidence on one side than the other?

Failure to Update

This was basically covered in Update Yourself Incrementally and One Argument Against An Army. The pattern of failing to update because you feel the weight of evidence points the other way is a recognizable one.

The Feeling of Doing It

For some people, rationalization has a distinct subjective experience that you can train yourself to recognize. Eliezer writes about it in Singlethink and later refers to it as “don’t even start to rationalize”.

If anyone has experience trying to develop this skill, please leave a comment.

Agreeing with Idiots

True, reversed stupidity is not intelligence. Nevertheless, if you find yourself arriving at the same conclusion as a large group of idiots, this is a suspicious observation that calls for an explanation. Possibilities include:

  • It’s a coincidence: they got lucky. This can happen, but the more complex the conclusion, the less likely.

  • They’re not all that idiotic. People with terrible overall epistemics can still have solid understanding within their comfort zones.

  • It’s not really the same conclusion; it just sounds like it when both are summarized poorly.

  • You and they rationalized the conclusion following the same interest.

Naturally, it is this last possibility that concerns us. The less likely the first three, the more worrying the last one.

Disagreeing with Experts

If someone who is clearly established as an expert in the field (possibly by having notable achievements in it) disagrees with you, this is a bad sign. It’s more a warning sign of bad logic in general than of rationalization in particular, but rationalization is a common cause of bad logic, and many of the same checks apply.


Next: Avoiding Rationalization