Why a New Rationalization Sequence?

This is the first in a five-post mini-sequence about rationalization, which I intend to post one per day. You may ask: why should we have such a sequence?

What is Rationalization and Why is it Bad?

For those of you just tuning in: rationalization is when you take a conclusion you want to reach and try to come up with an argument that concludes it. The argument looks very similar to one in which you started from the data, evaluated it as well as you could, and reached that conclusion naturally. Almost always similar enough to fool the casual observer, and often similar enough to fool yourself.

If you’re deliberately rationalizing for an outside audience, that’s out-of-scope for this sequence. All the usual ethics and game theory apply.

But if you’re involuntarily rationalizing and fooling yourself, then you’ve failed at epistemics. And your arts have turned against you. Know a lot about scientific failures? Now you can find them in all the studies you didn’t like!

Didn’t Eliezer Already Do This?

Eliezer wrote the Against Rationalization sequence back in 2007–2008. If you haven’t read it, you probably should. It does a good job of describing what rationalization is, how it can happen, and how bad it can be. It does not, however, provide many tools for protecting yourself from rationalization. That’s what I’ll be focusing on here.

And, besides, if we don’t revisit a topic this important every decade or so with new developments, then what is this community for?

Is There Hope?

Periodically, I hear someone give up on logical argument completely. “You can find an argument for anything,” they say. “Forget logic. Trust [your gut / tradition / me] instead.” Which glosses over the question of whether the proposed alternative is any better. There is no royal road to knowledge.

Still, the question needs answering. If rationalization looks just like logic, can we ever escape Cartesian Doubt?

The Psychiatrist Paradox

A common delusion among grandiose schizophrenics in institutions is that they are themselves psychiatrists. Consider a particularly underfunded mental hospital, in which the majority of people who “know” themselves to be psychiatrists are wrong. No examination of the evidence will convince them otherwise. No matter how overwhelming, some reason to disbelieve will be found.

Given this, should any amount of evidence suffice to convince you that you are such a psychiatrist?
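One way to formalize the bind (my framing, not a resolution): if your evaluation process finds the evidence convincing no matter what is actually true, then “I find the evidence convincing” carries no information. The likelihood ratio is

$$\frac{P(\text{convinced} \mid \text{psychiatrist})}{P(\text{convinced} \mid \text{deluded})} \approx 1,$$

so your posterior odds equal your prior odds, and in this hospital the prior odds are against you.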

I am not aware of any resolution to this paradox.

The Dreaming Paradox

But the Psychiatrist Paradox is based on an absolute fixed belief and total rationalization as seen in theoretically ideal schizophrenics. (How closely do real-world schizophrenics approximate this ideal? That question is beyond the scope of this document.) Let’s consider people a little more reality-affiliated: the dreaming.

Given that any evidence of wakefulness is a thing that can be dreamed, should you ever be more than 90% confident you’re awake? (Assuming 16 hours awake and 2 dreaming in a typical 24-hour period.)
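For concreteness, here’s where the 90% ceiling comes from under those assumptions. You get 16 hours of conscious experience awake and 2 dreaming, so the prior is

$$P(\text{awake}) = \frac{16}{16+2} = \frac{8}{9} \approx 0.889,$$

and if every piece of evidence $E$ is as easily dreamed as experienced, the likelihood ratio $P(E \mid \text{awake}) / P(E \mid \text{dreaming})$ is 1, so Bayes’ rule leaves the posterior stuck at the prior.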

(Boring answer: forget confidence; always act on the assumption that you’re awake, because that errs on the side of safety. We’ll come back to this thought.)

(Also boring: most lucid-dreaming enthusiasts report that they do find evidence of wakefulness or dreaming that dreams never forge. Assume you haven’t found any for yourself.)

Here’s my test: I ask my computer to prime-factor a large number (around ten digits), and I check the factorization by hand. I can dream many things, but I’m not going to dream that my computer doesn’t have a factoring program, nor will I forget how to multiply. And I can’t dream up a correct factorization, because I can’t factor numbers that big.
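For concreteness, here’s a minimal sketch of that test in Python. The ten-digit number and the trial-division factorizer are illustrative stand-ins for whatever factoring tool your computer actually has:

```python
# The dream test: the computer does the hard direction (factoring);
# I verify the easy direction (multiplying the factors back) by hand.

def prime_factors(n: int) -> list[int]:
    """Factor n by trial division. Slow in general, fine for ten digits."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

number = 9_876_543_210  # any ~ten-digit number will do
factors = prime_factors(number)
print(factors)

# The check I'd do by hand in the dream: multiply the factors back together.
product = 1
for f in factors:
    product *= f
assert product == number
```

The asymmetry is the whole point: producing the factorization is beyond what a dream can fake, but checking it only takes multiplication, which is within reach of a dreamer.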

You can’t outsmart an absolute tendency to rationalize, but you can outsmart a finite one. Which, I suspect, is what we mostly have.

A Disclaimer Regarding Authorship

Before I start on the meat of the sequence (in the next post), I should make clear that not all of these ideas are mine. Unfortunately, I’ve lost track of which ones are and which aren’t, and of who proposed the ones that aren’t. And the borrowed ideas have passed through me enough that they’re no longer entirely as their original authors portrayed them.

If I tried to untangle this mess and credit properly, I’d never get this written. So onward. If you wish to fix some bit of crediting, leave a comment and I’ll try to do something sensible.

Beyond Rationalization

Much of what appears here also applies to ordinary mistakes of logic. I’ll try to tag those cases as they come up.

The simplest ideal of thinking deals extensively with uncertainty of external facts, but trusts its own reasoning implicitly. Directly imitating this, when your own reasoning is not 100% trustworthy, is a bad plan. Hopefully this sequence will provide some alternatives.


Next: Red Flags for Rationalization