Belief in Belief

Carl Sagan once told a parable of a man who comes to us and claims: “There is a dragon in my garage.” Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! “But wait,” the claimant says to us, “it is an invisible dragon.”

Now as Sagan points out, this doesn’t make the hypothesis unfalsifiable. Perhaps we go to the claimant’s garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.

But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.”

Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he’ll need to excuse.

Some philosophers have been much confused by such scenarios, asking, “Does the claimant really believe there’s a dragon present, or not?” As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. As discussed in yesterday’s post, there are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not anticipate seeing anything unusual upon opening the garage door; otherwise he wouldn’t make advance excuses. It may also be that the claimant’s pool of propositional beliefs contains There is a dragon in my garage. It may seem, to a rationalist, that these two beliefs should collide and conflict even though they are of different types. Yet it is a physical fact that you can write “The sky is green!” next to a picture of a blue sky without the paper bursting into flames.

The rationalist virtue of empiricism is supposed to prevent us from this class of mistake. We’re supposed to constantly ask our beliefs which experiences they predict, make them pay rent in anticipation. But the dragon-claimant’s problem runs deeper, and cannot be cured with such simple advice. It’s not exactly difficult to connect belief in a dragon to anticipated experience of the garage. If you believe there’s a dragon in your garage, then you can expect to open up the door and see a dragon. If you don’t see a dragon, then that means there’s no dragon in your garage. This is pretty straightforward. You can even try it with your own garage.

No, this invisibility business is a symptom of something much worse.

Depending on how your childhood went, you may remember a time period when you first began to doubt Santa Claus’s existence, but you still believed that you were supposed to believe in Santa Claus, so you tried to deny the doubts. As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. What does it mean to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green? The statement is confusing; it’s not even clear what it would mean to believe it—what exactly would be believed, if you believed. You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green. Dennett calls this “belief in belief”.

And here things become complicated, as human minds are wont to do—I think even Dennett oversimplifies how this psychology works in practice. For one thing, if you believe in belief, you cannot admit to yourself that you only believe in belief, because it is virtuous to believe, not to believe in belief, and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, “I don’t believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it”—not unless they are unusually capable of acknowledging their own lack of virtue. People don’t believe in belief in belief, they just believe in belief.

(Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable. There are similarly sharp distinctions between P, wanting P, believing P, wanting to believe P, and believing that you believe P.)
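For readers who want that parenthetical made concrete, here is a minimal sketch in Lean 4 of the distinctions it gestures at. The operators Provable and Believes are illustrative assumptions introduced only for this sketch, not anything from the essay; the point is simply that each level is a different proposition, so the levels cannot be collapsed into one another.

```lean
section BeliefLevels
  variable (P : Prop)                 -- the proposition P itself
  variable (Provable : Prop → Prop)   -- hypothetical "… is provable" operator
  variable (Believes : Prop → Prop)   -- hypothetical "the agent believes …" operator

  -- A proof of P is a term of type P; it is a different thing from P,
  -- which is merely the type that such a term would inhabit.
  example (proofOfP : P) : P := proofOfP

  -- Each of the following is a distinct proposition, and none of them is P.
  #check Provable P              -- P is provable
  #check Provable (Provable P)   -- it is provable that P is provable
  #check Believes P              -- believing P
  #check Believes (Believes P)   -- believing that you believe P
end BeliefLevels
```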

There are different kinds of belief in belief. You may believe in belief explicitly; you may recite in your deliberate stream of consciousness the verbal sentence “It is virtuous to believe that the Ultimate Cosmic Sky is perfectly blue and perfectly green.” (While also believing that you believe this, unless you are unusually capable of acknowledging your own lack of virtue.) But there are also less explicit forms of belief in belief. Maybe the dragon-claimant fears the public ridicule that he imagines will result if he publicly confesses he was wrong (although, in fact, a rationalist would congratulate him, and others are more likely to ridicule him if he goes on claiming there’s a dragon in his garage). Maybe the dragon-claimant flinches away from the prospect of admitting to himself that there is no dragon, because it conflicts with his self-image as the glorious discoverer of the dragon, who saw in his garage what all others had failed to see.

If all our thoughts were deliberate verbal sentences like philosophers manipulate, the human mind would be a great deal easier for humans to understand. Fleeting mental images, unspoken flinches, desires acted upon without acknowledgement—these account for as much of ourselves as words.

While I disagree with Dennett on some details and complications, I still think that Dennett’s notion of belief in belief is the key insight necessary to understand the dragon-claimant. But we need a wider concept of belief, not limited to verbal sentences. “Belief” should include unspoken anticipation-controllers. “Belief in belief” should include unspoken cognitive-behavior-guiders. It is not psychologically realistic to say “The dragon-claimant does not believe there is a dragon in his garage; he believes it is beneficial to believe there is a dragon in his garage.” But it is realistic to say the dragon-claimant anticipates as if there is no dragon in his garage, and makes excuses as if he believed in the belief.

You can possess an ordinary mental picture of your garage, with no dragons in it, which correctly predicts your experiences on opening the door, and never once think the verbal phrase There is no dragon in my garage. I even bet it’s happened to you—that when you open your garage door or bedroom door or whatever, and expect to see no dragons, no such verbal phrase runs through your mind.

And to flinch away from giving up your belief in the dragon—or flinch away from giving up your self-image as a person who believes in the dragon—it is not necessary to explicitly think I want to believe there’s a dragon in my garage. It is only necessary to flinch away from the prospect of admitting you don’t believe.

To correctly anticipate, in advance, which experimental results shall need to be excused, the dragon-claimant must (a) possess an accurate anticipation-controlling model somewhere in his mind, and (b) act cognitively to protect either (b1) his free-floating propositional belief in the dragon or (b2) his self-image of believing in the dragon.

If someone believes in their belief in the dragon, and also believes in the dragon, the problem is much less severe. They will be willing to stick their neck out on experimental predictions, and perhaps even agree to give up the belief if the experimental prediction is wrong—although belief in belief can still interfere with this, if the belief itself is not absolutely confident. When someone makes up excuses in advance, it would seem to require that belief, and belief in belief, have become unsynchronized.