Making Beliefs Pay Rent (in Anticipated Experiences)

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”

If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.

Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?

Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.

It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like “Earth’s gravity is 9.8 meters per second per second” and “This building is around 120 meters tall.” These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock’s second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
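As a worked check on that anticipation, here is a minimal sketch in Python, assuming free fall from rest with no air resistance and ignoring the travel time of the sound; the 9.8 m/s² and roughly-120 m figures are the ones given above.

```python
import math

g = 9.8          # m/s^2, Earth's gravity (figure from the text)
height = 120.0   # m, approximate building height (figure from the text)

# Free fall from rest: t = sqrt(2h / g)
fall_time = math.sqrt(2 * height / g)

print(f"Anticipated fall time: {fall_time:.2f} s")
# ~4.95 s, i.e. about five ticks of the second hand, from the 12 numeral to the 1.
```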

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could simplistically model their minds by drawing a little node labeled “Phlogiston,” and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance.

Or suppose your English professor teaches you that the famous writer Wulky Wilkinsen is actually a “retropositional author,” which you can tell because his books exhibit “alienated resublimation.” And perhaps your professor knows all this because their professor told them; but all they’re able to say about resublimation is that it’s characteristic of retropositional thought, and of retropositionality that it’s marked by alienated resublimation. What does this mean you should expect from Wulky Wilkinsen’s books?

Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all. But you had better remember the propositional assertions that “Wulky Wilkinsen” has the “retropositionality” attribute and also the “alienated resublimation” attribute, so you can regurgitate them on the upcoming quiz. The two beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these “floating” beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens’s ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit. Do you believe that phlogiston is the cause of fire? Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a retropositional author? Then what do you expect to see because of that? No, not “alienated resublimation”; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you? Do you believe that élan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you. It floats.

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen’s writing being retropositional, you can go on arguing forever.

Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.