A Process for Dealing with Motivated Reasoning

Epistemic status: Had a valley-of-bad-rationality problem, did some thinking about how to solve it in my own case, and thought the results might be useful for other people who have the same problem. (To be clear, I don’t yet claim this solution works, since I haven’t really tried it yet, but it seems like the kind of thing that might work. The claim here is “this might be worth trying” rather than “this works”, and the target audience is myself and people sufficiently similar to me.)

Epistemic effort: Maybe 2-3 hours of writing cumulatively. Thanks to Elephantiskon for feedback.

EDIT: Here’s a summary of the post, which I was told was much clearer than the original:

There’s a thing people (including me) sometimes do, where they (unreflectively) assume that the conclusions of motivated reasoning are always wrong, and dismiss them out of hand. That seems like a bad plan. Instead, try going into System II mode and reexamining conclusions you think might be the result of motivated reasoning, rather than immediately dismissing them. This isn’t to say that System II processes are completely immune to motivated reasoning, far from it, but “apply extra scrutiny” seems like a better strategy than “dismiss out of hand.”
This habit of [automatically dismissing anything that seems like it might be the result of motivated reasoning] can lead to decision paralysis and pathological self-doubt. The point of this post is to correct for that somewhat.

It sometimes seems like a substantial fraction of my reasoning is driven by an awareness of the insidiousness of motivated reasoning, and a desire to avoid it. In particular, I sometimes have thoughts like the following:

Brain: I would like to go to philosophy graduate school.
Me: But 80,000 Hours says it’s usually not a good idea to go to philosophy graduate school...
Brain: But none of the other options I can come up with seem especially realistic. Combine that with the fact that grad school can be made into a good path if you do it right, and it actually seems like a pretty good option.
Me: But I started off wanting to go to graduate school because it seemed like fun. Seems pretty suspicious that it would turn out to be my best option, despite the fact that, according to 80k, for most EAs it’s not. Are you sure you’re not engaging in motivated reasoning?
Brain: Are you sure you’re not engaging in motivated reasoning? Are you sure you’re not just trying to make a decision that’s socially defensible to our in-group (other EAs)?

Um, what??

I seem to be reasoning as if there were a general principle that, if there’s a plausible way that I might be using motivated reasoning to come to a particular conclusion, that conclusion must be wrong. In other words, my brain has decided that anything tagged as “probably based on motivated reasoning” is false. Another way of thinking about this is that I’m using “that’s probably based on motivated reasoning” as a fully general excuse against myself.

While being averse to motivated reasoning seems reasonable, the general principle that any conclusion arrived at by motivated reasoning must be false seems ridiculous when I write it out. Obviously, my coming to a conclusion by motivated reasoning doesn’t have any effect on whether or not it’s true; it’s already true or already false, no matter what sort of reasoning I used.[1]

A better process for dealing with motivated reasoning might be:

If

1. Getting the right (true) answer matters in a particular case, and

2. There’s a plausible reason to suspect that I might be coming to my answer in that case on the basis of motivated reasoning,

then it is worth it to:

a. Go into System II/slow thinking/manual mode/whatever.

b. Ask yourself what you would do if your (potentially motivated-reasoning-generated) conclusion were true, and what you would do if it were false (cf. leave a line of retreat, split-and-commit, see the dark world).[2]

c. Use explicit, gears-based, model-based reasoning to check your conclusion (e.g., list out all the important considerations in a doc, make a spreadsheet if the problem is quantitative, etc.).

Then, whatever answer comes out of that process, trust it until new information comes along, then rerun the process (sketched in toy code below).
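If it helps to see the checklist laid out mechanically, here’s a toy sketch in Python. Everything in it is made up for illustration: plan_assuming and explicit_model_check are hypothetical stand-ins for mental moves, not real functions, and the point is only to make the control flow of the process explicit.

```python
# Toy encoding of the process above. All helpers are hypothetical
# stand-ins for mental moves, not a real API.

def plan_assuming(conclusion: str, value: bool) -> str:
    # Step (b): leave a line of retreat by asking what you would
    # actually do in the world where `conclusion` has truth value `value`.
    return f"my plan if {conclusion!r} is {value}"

def explicit_model_check(conclusion: str) -> bool:
    # Step (c): stand-in for explicit, gears-based checking (a doc of
    # considerations, a spreadsheet, etc.). Here it just returns a stub.
    return True

def reexamine(conclusion: str, matters: bool, plausibly_motivated: bool):
    """Reevaluate a suspect conclusion instead of dismissing it outright."""
    if not (matters and plausibly_motivated):
        # Conditions 1 and 2 don't both hold: extra scrutiny isn't worth it.
        return conclusion
    # (a) Going into System II mode is represented here simply by
    # running the explicit steps below at all.
    retreat_lines = [plan_assuming(conclusion, v) for v in (True, False)]
    # (c) Check the conclusion with explicit models, then trust the
    # result until new information arrives (at which point, rerun).
    return explicit_model_check(conclusion), retreat_lines
```

E.g., reexamine("grad school is my best option", matters=True, plausibly_motivated=True) returns whatever the explicit check says, together with both lines of retreat, rather than an automatic “false.”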

To sum up: if you have a habit of dismissing a belief when you notice it might be the result of motivated reasoning, it might be worth it to replace that habit with the habit of reevaluating the belief instead.


[1] To be clear, I do think the basic idea that [if something seems to be the result of motivated reasoning, that’s evidence against it] is probably correct. I just think that you shouldn’t update all the way to “this is false”, since the thing might still be true.
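To put made-up numbers on that (purely for illustration): suppose my prior on some claim X is P(X) = 0.5, and let C be the event “I concluded X.” Bayes’ rule gives

P(X | C) = P(C | X) P(X) / [P(C | X) P(X) + P(C | ¬X) P(¬X)]

If my reasoning were unbiased, with, say, P(C | X) = 0.8 and P(C | ¬X) = 0.2, the posterior would be 0.8. If I instead notice I’m motivated to conclude X, so that something like P(C | X) = 0.95 and P(C | ¬X) = 0.6 is more realistic, the posterior drops to 0.475/0.775 ≈ 0.61. Noticing the motivation is genuine evidence against taking the conclusion at face value (0.8 down to 0.61), but nowhere near enough to push it all the way to “false.”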

[2] I think the basic idea behind why reasoning hypothetically in this way helps is this: it takes the focus off of deciding whether X is true (which is the step that’s suspect) and puts it onto deciding what that would lead to. I like to think of it as first “fixing” X as true, and then “fixing” X as false.