Why a New Rationalization Sequence?

This is the first in a five-post mini-sequence about rationalization, which I intend to post one per day. And you may ask, why should we have such a sequence?

What is Rationalization and Why is it Bad?

For those of you just tuning in, rationalization is when you take a conclusion you want to reach and try to come up with an argument that concludes it. The argument looks very similar to one in which you started from data, evaluated it as well as you could, and reached this conclusion naturally. Almost always similar enough to fool the casual observer, and often similar enough to fool yourself.

If you’re deliberately rationalizing for an outside audience, that’s out of scope for this sequence. All the usual ethics and game theory apply.

But if you’re involuntarily rationalizing and fooling yourself, then you’ve failed at epistemics. And your arts have turned against you. Know a lot about scientific failures? Now you can find them in all the studies you didn’t like!

Didn’t Eliezer Already Do This?

Eliezer wrote the Against Rationalization sequence back in 2007–8. If you haven’t read it, you probably should. It does a good job of describing what rationalization is, how it can happen, and how bad it can be. It does not provide a lot of tools for protecting yourself from rationalization. That’s what I’ll be focusing on here.

And, besides, if we don’t revisit a topic this important every decade or so with new developments, then what is this community for?

Is There Hope?

Periodically, I hear someone give up on logical argument completely. “You can find an argument for anything,” they say. “Forget logic. Trust [your gut / tradition / me] instead.” Which brushes over the question of whether the proposed alternative is any better. There is no royal road to knowledge.

Still, the question needs answering. If rationalization looks just like logic, can we ever escape Cartesian Doubt?

The Psychiatrist Paradox

A common delusion among grandiose schizophrenics in institutions is that they are themselves psychiatrists. Consider a particularly underfunded mental hospital, in which the majority of people who “know” themselves to be psychiatrists are wrong. No examination of the evidence will convince them otherwise. No matter how overwhelming, some reason to disbelieve will be found.

Given this, should any amount of evidence suffice to convince you that you are such a psychiatrist?

I am not aware of any resolution to this paradox.

The Dreaming Paradox

But the Psychiatrist Paradox is based on an absolute fixed belief and total rationalization as seen in theoretically ideal schizophrenics. (How closely do real-world schizophrenics approximate this ideal? That question is beyond the scope of this document.) Let’s consider people a little more reality-affiliated: the dreaming.

Given that any evidence of awakeness is a thing that can be dreamed, should you ever be more than 90% confident you’re awake? (Assuming 16 hours awake and 2 dreaming in a typical 24-hour period.)
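(Where the 90% figure comes from, on my reading: all of your experiences happen during the 16 waking hours or the 2 dreaming hours, so before considering any evidence, the base rate of being awake is

$$
P(\text{awake}) = \frac{16}{16 + 2} \approx 0.889,
$$

just under 90%. Getting above that requires evidence genuinely more likely to appear when you’re awake.)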

(Boring answer: forget confidence; always act on the assumption that you’re awake, because it’s erring on the side of safety. We’ll come back to this thought.)

(Also boring: most lucid dreaming enthusiasts report they do find evidence of wakefulness or dreaminess which dreams never forge. Assume you haven’t found any for yourself.)

Here’s my test: I ask my computer to prime-factor a large number (around ten digits) and check it by hand. I can dream many things, but I’m not going to dream that my computer doesn’t have the factoring program, nor will I forget how to multiply. And I can’t dream that it factored correctly, because I can’t factor numbers that big.
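To make the test concrete, here’s a minimal sketch in Python. Using sympy’s factorint is my assumption, standing in for “the factoring program”; any factoring tool would serve:

```python
# A sketch of the wakefulness test: the computer does the hard direction
# (factoring), and you check the easy direction (multiplying) by hand.
import random
from sympy import factorint  # assumed stand-in for "the factoring program"

# A number around ten digits: far too big to factor in your head,
# so a dream could not plausibly forge the computer's answer.
n = random.randrange(10**9, 10**10)

factors = factorint(n)  # maps each prime factor to its exponent
print(n, "=", " * ".join(f"{p}^{e}" for p, e in factors.items()))

# The by-hand step: multiply the factors back together and compare.
# Multiplication stays easy even when factoring is hard, which is
# what makes the test asymmetric in the right way.
product = 1
for p, e in factors.items():
    product *= p ** e
assert product == n
```

Of course, the point is that the checking step happens in your own head: you’d do the multiplication on paper rather than trusting the assert.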

You can’t outsmart an absolute tendency to rationalize, but you can outsmart a finite one. Which, I suspect, is what we mostly have.

A Disclaimer Regarding Authorship

Before I start on the meat of the sequence (in the next post), I should make clear that not all these ideas are mine. Unfortunately, I’ve lost track of which ones are and which aren’t, and of who proposed the ones which aren’t. And the ones that aren’t original to me have still gone through me enough that they’re not entirely as their original authors portrayed them.

If I tried to untangle this mess and credit properly, I’d never get this written. So onward. If you wish to fix some bit of crediting, leave a comment and I’ll try to do something sensible.

Beyond Rationalization

Much of what appears here also applies to ordinary mistakes of logic. I’ll try to tag such cases as they come up.

The simplest ideal of thinking deals extensively with uncertainty of external facts, but trusts its own reasoning implicitly. Directly imitating this, when your own reasoning is not 100% trustworthy, is a bad plan. Hopefully this sequence will provide some alternatives.


Next: Red Flags for Rationalization