Where Recursive Justification Hits Bottom

Why do I believe that the Sun will rise tomorrow?

Because I’ve seen the Sun rise on thousands of previous days.

Ah… but why do I believe the future will be like the past?

Even if I go past the mere surface observation of the Sun rising, to the apparently universal and exceptionless laws of gravitation and nuclear physics, then I am still left with the question: “Why do I believe this will also be true tomorrow?”

I could appeal to Occam’s Razor, the principle of using the simplest theory that fits the facts… but why believe in Occam’s Razor? Because it’s been successful on past problems? But who says that this means Occam’s Razor will work tomorrow?

And lo, the one said:

“Science also depends on unjustified assumptions. Thus science is ultimately based on faith, so don’t you criticize me for believing in [silly-belief-#238721].”

As I’ve previously observed:

It’s a most peculiar psychology—this business of “Science is based on faith too, so there!” Typically this is said by people who claim that faith is a good thing. Then why do they say “Science is based on faith too!” in that angry-triumphal tone, rather than as a compliment?

Arguing that you should be immune to criticism is rarely a good sign.

But this doesn’t answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?

And if you’re allowed to end in something assumed-without-justification, then why aren’t you allowed to assume anything without justification?

A similar critique is sometimes leveled against Bayesianism—that it requires assuming some prior—by people who apparently think that the problem of induction is a particular problem of Bayesianism, which you can avoid by using classical statistics. I will speak of this later, perhaps.

But first, let it be clearly admitted that the rules of Bayesian updating do not of themselves solve the problem of induction.

Suppose you’re drawing red and white balls from an urn. You observe that, of the first 9 balls, 3 are red and 6 are white. What is the probability that the next ball drawn will be red?

That depends on your prior beliefs about the urn. If you think the urn-maker generated a uniform random number between 0 and 1, and used that number as the fixed probability of each ball being red, then the answer is 4/11 (by Laplace’s Law of Succession). If you think the urn originally contained 10 red balls and 10 white balls, then the answer is 7/11.
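Both answers fall out of a short calculation; here is a sketch in Python (an illustration of the arithmetic, not anything from the original text) contrasting the two priors on the same observations:

```python
from fractions import Fraction

def laplace_successor(red_seen, total_seen):
    """Posterior predictive under a uniform prior on the urn's red-fraction:
    Laplace's Law of Succession, P(next red) = (r + 1) / (n + 2)."""
    return Fraction(red_seen + 1, total_seen + 2)

def known_urn(red_seen, white_seen, red_total=10, white_total=10):
    """Drawing without replacement from an urn known to start with
    10 red and 10 white balls: P(next red) = remaining red / remaining balls."""
    remaining = (red_total + white_total) - (red_seen + white_seen)
    return Fraction(red_total - red_seen, remaining)

print(laplace_successor(3, 9))  # 4/11
print(known_urn(3, 6))          # 7/11
```

Same data, different priors, different predictions: the observations alone do not pin down the answer.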

Which goes to say that, with the right prior—or rather the wrong prior—the chance of the Sun rising tomorrow would seem to go down with each succeeding day… if you were absolutely certain, a priori, that there was a great barrel out there from which, on each day, there was drawn a little slip of paper that determined whether the Sun rose or not; and that the barrel contained only a limited number of slips saying “Yes”, and the slips were drawn without replacement.
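To make the barrel concrete with made-up numbers (the essay specifies none): suppose you were certain the barrel held 10,000 slips, 5,000 of them saying “Yes”, drawn without replacement. Then every sunrise you observe makes tomorrow’s sunrise strictly less probable:

```python
from fractions import Fraction

def p_rises_tomorrow(yes_slips, total_slips, sunrises_seen):
    # After sunrises_seen "Yes" slips are gone (drawn without replacement),
    # tomorrow's slip says "Yes" with probability (yes - seen) / (total - seen).
    return Fraction(yes_slips - sunrises_seen, total_slips - sunrises_seen)

for day in (0, 1000, 4000):
    print(day, p_rises_tomorrow(5000, 10000, day))  # 1/2, then 4/9, then 1/6
```

An anti-inductive conclusion, reached by perfectly valid Bayesian updating on a perverse prior.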

There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.

And when you ask these strange beings why they keep using priors that never seem to work in real life… they reply, “Because it’s never worked for us before!”

Now, one lesson you might derive from this is “Don’t be born with a stupid prior.” This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.

Here’s how I treat this problem myself: I try to approach questions like “Should I trust my brain?” or “Should I trust Occam’s Razor?” as though they were nothing special—or at least, nothing special as deep questions go.

Should I trust Occam’s Razor? Well, how well does (any particular version of) Occam’s Razor seem to work in practice? What kind of probability-theoretic justifications can I find for it? When I look at the universe, does it seem like the kind of universe in which Occam’s Razor would work well?

Should I trust my brain? Obviously not; it doesn’t always work. But nonetheless, the human brain seems much more powerful than the most sophisticated computer programs I could consider trusting otherwise. How well does my brain work in practice, on which sorts of problems?

When I examine the causal history of my brain—its origins in natural selection—I find, on the one hand, all sorts of specific reasons for doubt; my brain was optimized to run on the ancestral savanna, not to do math. But on the other hand, it’s also clear why, loosely speaking, it’s possible that the brain really could work. Natural selection would have quickly eliminated brains so completely unsuited to reasoning, so anti-helpful, as anti-Occamian or anti-Laplacian priors.

So what I did in practice does not amount to declaring a sudden halt to questioning and justification. I’m not halting the chain of examination at the point that I encounter Occam’s Razor, or my brain, or some other unquestionable. The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use?

Indeed, no matter what I did with this dilemma, it would be me doing it. Even if I trusted something else, like some computer program, it would be my own decision to trust it.

The technique of rejecting beliefs that have absolutely no justification is in general an extremely important one. I sometimes say that the fundamental question of rationality is “Why do you believe what you believe?” I don’t even want to say something that sounds like it might allow a single exception to the rule that everything needs justification.

But I would nonetheless emphasize the difference between saying:

“Here is this assumption I cannot justify, which must be simply taken, and not further examined.”

Versus saying:

“Here the inquiry continues to examine this assumption, with the full force of my present intelligence—as opposed to the full force of something else, like a random number generator or a magic 8-ball—even though my present intelligence happens to be founded on this assumption.”

Still… wouldn’t it be nice if we could examine the problem of how much to trust our brains without using our current intelligence? Wouldn’t it be nice if we could examine the problem of how to think, without using our current grasp of rationality?

When you phrase it that way, it starts looking like the answer might be “No”.

E. T. Jaynes used to say that you must always use all the information available to you—he was a Bayesian probability theorist, and had to clean up the paradoxes other people generated when they used different information at different points in their calculations. The principle of “Always put forth your true best effort” has at least as much appeal as “Never do anything that might look circular.” After all, the alternative to putting forth your best effort is presumably doing less than your best.

But still… wouldn’t it be nice if there were some way to justify using Occam’s Razor, or justify predicting that the future will resemble the past, without assuming that those methods of reasoning which have worked on previous occasions are better than those which have continually failed?

Wouldn’t it be nice if there were some chain of justifications that neither ended in an unexaminable assumption, nor was forced to examine itself under its own rules, but, instead, could be explained starting from absolute scratch to an ideal philosophy student of perfect emptiness?

Well, I’d certainly be interested, but I don’t expect to see it done any time soon. I’ve argued elsewhere in several places against the idea that you can have a perfectly empty ghost-in-the-machine; there is no argument that you can explain to a rock.

Even if someone cracks the First Cause problem and comes up with the actual reason the universe is simple, which does not itself presume a simple universe… then I would still expect that the explanation could only be understood by a mindful listener, and not by, say, a rock. A listener that didn’t start out already implementing modus ponens might be out of luck.

So, at the end of the day, what happens when someone keeps asking me “Why do you believe what you believe?”

At present, I start going around in a loop at the point where I explain, “I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct.”

But then… haven’t I just licensed circular logic?

Actually, I’ve just licensed reflecting on your mind’s degree of trustworthiness, using your current mind as opposed to something else.

Reflection of this sort is, indeed, the reason we reject most circular logic in the first place. We want to have a coherent causal story about how our mind comes to know something, a story that explains how the process we used to arrive at our beliefs is itself trustworthy. This is the essential demand behind the rationalist’s fundamental question, “Why do you believe what you believe?”

Now suppose you write on a sheet of paper: “(1) Everything on this sheet of paper is true, (2) The mass of a helium atom is 20 grams.” If that trick actually worked in real life, you would be able to know the true mass of a helium atom just by believing some circular logic which asserted it. Which would enable you to arrive at a true map of the universe sitting in your living room with the blinds drawn. Which would violate the second law of thermodynamics by generating information from nowhere. Which would not be a plausible story about how your mind could end up believing something true.

Even if you started out believing the sheet of paper, it would not seem that you had any reason for why the paper corresponded to reality. It would just be a miraculous coincidence that (a) the mass of a helium atom was 20 grams, and (b) the paper happened to say so.

Believing, in general, self-validating statement sets does not seem like it should work to map external reality—when we reflect on it as a causal story about minds—using, of course, our current minds to do so.

But what about evolving to give more credence to simpler beliefs, and to believe that algorithms which have worked in the past are more likely to work in the future? Even when we reflect on this as a causal story of the origin of minds, it still seems like this could plausibly work to map reality.

And what about trusting reflective coherence in general? Wouldn’t most possible minds, randomly generated and allowed to settle into a state of reflective coherence, be incorrect? Ah, but we evolved by natural selection; we were not generated randomly.

If trusting this argument seems worrisome to you, then forget about the problem of philosophical justifications, and ask yourself whether it’s really truly true.

(You will, of course, use your own mind to do so.)

Is this the same as the one who says, “I believe that the Bible is the word of God, because the Bible says so”?

Couldn’t they argue that their blind faith must also have been placed in them by God, and is therefore trustworthy?

In point of fact, when religious people finally come to reject the Bible, they do not do so by magically jumping to a non-religious state of pure emptiness, and then evaluating their religious beliefs in that non-religious state of mind, and then jumping back to a new state with their religious beliefs removed.

People go from being religious, to being non-religious, because even in a religious state of mind, doubt seeps in. They notice their prayers (and worse, the prayers of seemingly much worthier people) are not being answered. They notice that God, who speaks to them in their heart in order to provide seemingly consoling answers about the universe, is not able to tell them the hundredth digit of pi (which would be a lot more reassuring, if God’s purpose were reassurance). They examine the story of God’s creation of the world and damnation of unbelievers, and it doesn’t seem to make sense even under their own religious premises.

Being religious doesn’t make you less than human. Your brain still has the abilities of a human brain. The dangerous part is that being religious might stop you from applying those native abilities to your religion—stop you from reflecting fully on yourself. People don’t heal their errors by resetting themselves to an ideal philosopher of pure emptiness and reconsidering all their sensory experiences from scratch. They heal themselves by becoming more willing to question their current beliefs, using more of the power of their current mind.

This is why it’s important to distinguish between reflecting on your mind using your mind (it’s not like you can use anything else) and having an unquestionable assumption that you can’t reflect on.

“I believe that the Bible is the word of God, because the Bible says so.” Well, if the Bible were an astoundingly reliable source of information about all other matters, if it had not said that grasshoppers had four legs or that the universe was created in six days, but had instead contained the Periodic Table of Elements centuries before chemistry—if the Bible had served us only well and told us only truth—then we might, in fact, be inclined to take seriously the additional statement in the Bible, that the Bible had been generated by God. We might not trust it entirely, because it could also be aliens or the Dark Lords of the Matrix, but it would at least be worth taking seriously.

Likewise, if everything else that priests had told us turned out to be true, we might take more seriously their statement that faith had been placed in us by God and was a systematically trustworthy source—especially if people could divine the hundredth digit of pi by faith as well.

So the important part of appreciating the circularity of “I believe that the Bible is the word of God, because the Bible says so,” is not so much that you are going to reject the idea of reflecting on your mind using your current mind. But, rather, that you realize that anything which calls into question the Bible’s trustworthiness, also calls into question the Bible’s assurance of its trustworthiness.

This applies to rationality too: if the future should cease to resemble the past—even on its lowest and simplest and most stable observed levels of organization—well, mostly, I’d be dead, because my brain’s processes require a lawful universe where chemistry goes on working. But if somehow I survived, then I would have to start questioning the principle that the future should be predicted to be like the past.

But for now… what’s the alternative to saying, “I’m going to believe that the future will be like the past on the most stable level of organization I can identify, because that’s previously worked better for me than any other algorithm I’ve tried”?

Is it saying, “I’m going to believe that the future will not be like the past, because that algorithm has always failed before”?

At this point I feel obliged to drag up the point that rationalists are not out to win arguments with ideal philosophers of perfect emptiness; we are simply out to win. For which purpose we want to get as close to the truth as we can possibly manage. So at the end of the day, I embrace the principle: “Question your brain, question your intuitions, question your principles of rationality, using the full current force of your mind, and doing the best you can do at every point.”

If one of your current principles does come up wanting—according to your own mind’s examination, since you can’t step outside yourself—then change it! And then go back and look at things again, using your new improved principles.

The point is not to be reflectively consistent. The point is to win. But if you look at yourself and play to win, you are making yourself more reflectively consistent—that’s what it means to “play to win” while “looking at yourself”.

Everything, without exception, needs justification. Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops. I do think that reflective loops have a meta-character which should enable one to distinguish them, by common sense, from circular logics. But anyone seriously considering a circular logic in the first place is probably out to lunch in matters of rationality; and will simply insist that their circular logic is a “reflective loop” even if it consists of a single scrap of paper saying “Trust me”. Well, you can’t always optimize your rationality techniques according to the sole consideration of preventing those bent on self-destruction from abusing them.

The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.

Always apply full force, whether it loops or not—do the best you can possibly do, whether it loops or not—and play, ultimately, to win.