A Priori

Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you’re the first to defect—making you a bad, bad person. To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn’t run, and this is equally true whether anyone else breaks the rules or not.

Consider the problem of Occam’s Razor, as confronted by Traditional philosophers. If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?

You could argue that Occam’s Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam’s Razor. “Occam’s Razor works up to October 8th, 2007 and then stops working thereafter” is more complex, but it fits the observed evidence equally well.
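The comparison can be made concrete. Here is a toy sketch (the rule strings, the data, and the use of description length as a stand-in for complexity are all illustrative assumptions of mine, not anything from the text): two hypotheses that fit every pre-cutoff observation equally well, where the gerrymandered one simply costs more to write down.

```python
from datetime import date

CUTOFF = date(2007, 10, 8)

# Each hypothesis is a (description, predictor) pair; the length of the
# description string stands in, crudely, for complexity.
simple = ("simpler is likelier",
          lambda d: "prefer-simpler")
gerrymandered = ("simpler is likelier before 2007-10-08, complex thereafter",
                 lambda d: "prefer-simpler" if d < CUTOFF else "prefer-complex")

# Both hypotheses fit every observation made before the cutoff equally well...
past_observations = [date(2007, month, 1) for month in range(1, 10)]
assert all(simple[1](d) == gerrymandered[1](d) for d in past_observations)

# ...but the gerrymandered hypothesis has the strictly longer description.
assert len(gerrymandered[0]) > len(simple[0])
```

No past observation distinguishes the two rules; only the complexity penalty does.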

You could argue that Occam’s Razor is a reasonable distribution on prior probabilities. But what is a “reasonable” distribution? Why not label “reasonable” a very complicated prior distribution, which makes Occam’s Razor work in all observed tests so far, but generates exceptions in future cases?

Indeed, it seems there is no way to *justify* Occam’s Razor except by appealing to Occam’s Razor, making this *argument* unlikely to *convince* any *judge* who does not already *accept* Occam’s Razor. (What’s special about the words I italicized?)

If you are a philosopher whose daily work is to write papers, criticize other people’s papers, and respond to others’ criticisms of your own papers, then you may look at Occam’s Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase “a priori truth”.

But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying “a priori” doesn’t explain why the brain-engine runs. If the brain has an amazing “a priori truth factory” that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can’t use the “a priori truth factory” to locate drinkable water. It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.

James R. Newman said: “The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2.” The Internet Encyclopedia of Philosophy defines “a priori” propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are “discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe.” You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.

But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains. Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.

When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns. In principle, we could observe, experientially, the exact same material events as they occurred within someone else’s brain. It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done. You could see someone else’s engine operating materially, through material chains of cause and effect, to compute by “pure thought” that 1 + 1 = 2. How is observing this pattern in someone else’s brain any different, as a way of knowing, from observing your own brain doing the same thing? When “pure thought” tells you that 1 + 1 = 2, “independently of any experience or observation”, you are, in effect, observing your own brain as evidence.

If this seems counterintuitive, try to see minds/brains as engines—an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2. If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern. In other words, for every form of a priori knowledge obtained by “pure thought”, you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation. The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.

There is nothing you can know “a priori”, which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain. What do you think you are, dear reader?

This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch “3 x 4” into a calculator to predict the result of imagining 4 rows with 3 apples per row. You and the apple exist within a boundary-less unified physical process, and one part may echo another.

Are the sort of neural flashes that philosophers label “a priori beliefs” arbitrary? Many AI algorithms function better with “regularization” that biases the solution space toward simpler solutions. But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1000 extra lines) compared to unregularized algorithms. The human brain is biased toward simplicity, and we think more efficiently thereby. If you press the Ignore button at this point, you’re left with a complex brain that exists for no reason and works for no reason. So don’t try to tell me that “a priori” beliefs are arbitrary, because they sure aren’t generated by rolling random numbers. (What does the adjective “arbitrary” mean, anyway?)
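To illustrate the point about regularization, here is a minimal sketch of L2 (“ridge”) regularization for a one-parameter linear model, with made-up data (the numbers and function names are my own assumptions). Note that the regularized fitting function literally contains the extra code the paragraph mentions, yet it pulls the answer toward the simpler solution.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]   # roughly y = 2x, plus noise

def fit_unregularized(xs, ys):
    # Ordinary least squares through the origin: w = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fit_ridge(xs, ys, lam):
    # Ridge: w = sum(x*y) / (sum(x*x) + lam) — one extra term, one extra
    # line of code, biasing the solution toward the simpler answer w = 0.
    penalty = lam
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + penalty)

w_plain = fit_unregularized(xs, ys)
w_ridge = fit_ridge(xs, ys, lam=5.0)

# The regularized estimate is pulled toward zero (the "simpler" solution),
# even though the fitting code itself grew more complex.
assert abs(w_ridge) < abs(w_plain)
```

The algorithm with the simplicity bias is the more complicated piece of code; the bias is not free, and it is not random.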

You can’t excuse calling a proposition “a priori” by pointing out that other philosophers are having trouble justifying their propositions. If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs. There’s no truce, no white flag, until you understand why the engine works.

If you clear your mind of justification, of argument, then it seems obvious why Occam’s Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. “But,” you cry, “why is the universe itself orderly?” This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as “How do I argue Occam’s Razor to a hypothetical debater who has not already accepted it?”

Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam’s Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn’t implement Modus Ponens, it can accept “A” and “A->B” all day long without ever producing “B”. How do you justify Modus Ponens to a mind that hasn’t accepted it? How do you argue a rock into becoming a mind?
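The Modus Ponens point can be made concrete with a toy “argument-acceptor”: a belief set plus an inference step (the belief representation here is an illustrative assumption of mine). One mind implements the rule and derives “B”; the other accepts the premises all day long and derives nothing.

```python
def closure_with_modus_ponens(beliefs):
    """Repeatedly apply modus ponens: from A and ('if', A, B), add B."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for belief in list(beliefs):
            if isinstance(belief, tuple) and belief[0] == "if":
                _, a, b = belief
                if a in beliefs and b not in beliefs:
                    beliefs.add(b)
                    changed = True
    return beliefs

def closure_without_rules(beliefs):
    """A 'rock': accepts premises, applies no inference rule, derives nothing."""
    return set(beliefs)

premises = {"A", ("if", "A", "B")}

assert "B" in closure_with_modus_ponens(premises)   # the engine produces "B"
assert "B" not in closure_without_rules(premises)   # the rock never does
```

Nothing about the premises themselves forces “B” out; the dynamic structure that applies the rule has to already be there.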

Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness. This does not make our judgments meaningless. A brain-engine can work correctly, producing accurate beliefs, even if it was merely built—by human hands or cumulative stochastic selection pressures—rather than argued into existence. But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.