Justifying Induction

Related to: Where Recursive Justification Hits Bottom, Priors as Mathematical Objects, Probability is Subjectively Objective

Follow-up to: A Proof of Occam’s Razor

In my post on Occam’s Razor, I showed that a certain weak form of the Razor follows necessarily from standard mathematics and probability theory. Naturally, the Razor as used in practice is stronger and more concrete, and cannot be proven to be necessarily true. So rather than attempting to give a necessary proof, I pointed out that we learn by induction what concrete form the Razor should take.

But what justifies induction? Like the Razor, some aspects of it follow necessarily from standard probability theory, while other aspects do not.

Suppose we consider the statement S, “The sun will rise every day for the next 10,000 days,” assigning it a probability p between 0 and 1. Then suppose we are given evidence E, namely that the sun rises tomorrow. What is our updated probability for S? According to Bayes’ theorem, our new probability will be:

P(S|E) = P(E|S)P(S)/P(E) = p/P(E), because given that the sun will rise every day for the next 10,000 days, it will certainly rise tomorrow; and since P(E) is less than 1, our new probability is greater than p. So this seems to justify induction, showing it to work of necessity. But does it? In the same way we could argue that the probability that “every human being is less than 10 feet tall” must increase every time we see another human being less than 10 feet tall, since the probability of this evidence (“the next human being I see will be less than 10 feet tall”), given the hypothesis, is also 1. On the other hand, if we come upon a human being 9 feet 11 inches tall, our subjective probability that there is a 10-foot-tall human being will increase, not decrease. So is there something wrong with the math here? Or with our intuitions?
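As a quick numeric check, the update can be computed directly. The particular values of p and P(E) below are illustrative assumptions of mine, not anything argued for here; all that matters is that P(E) is less than 1.

```python
# Bayes' theorem applied to S = "the sun will rise every day for the
# next 10,000 days" and E = "the sun rises tomorrow". Since S entails E,
# P(E|S) = 1. The numeric priors are illustrative assumptions.
p_S = 0.5            # prior probability p of S
p_E = 0.9            # prior probability of E (at least p_S, and below 1)
p_E_given_S = 1.0    # S entails E

p_S_given_E = p_E_given_S * p_S / p_E
print(p_S_given_E)   # ~0.556: greater than the prior 0.5, as claimed
```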

In fact, the problem is neither with the math nor with the intuition. Given that every human being is less than 10 feet tall, the probability that “the next human being I see will be less than 10 feet tall” is indeed 1, but the probability that “there is a human being 9 feet 11 inches tall” is definitely not 1. So the math updates on a single aspect of our evidence, while our intuition is taking more of the evidence into account.

But this math seems to work because we are trying to induce a universal which includes the evidence. Suppose instead we try to go from one particular to another: I see a black crow today. Does it become more probable that a crow I see tomorrow will also be black? We know from the above reasoning that it becomes more probable that all crows are black, and one might suppose that it therefore follows that it is more probable that the next crow I see will be black. But this does not follow. The probability of “I see a black crow today,” given that “I see a black crow tomorrow,” is certainly not 1, and so the probability of seeing a black crow tomorrow, given that I see one today, may increase or decrease depending on our prior – no necessary conclusion can be drawn. Eliezer points this out in the article Where Recursive Justification Hits Bottom.
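To see concretely that the direction of this particular-to-particular update depends entirely on the prior, here is a small sketch of my own (the two joint distributions are invented for illustration): a positively correlated prior pushes the probability up, an anti-correlated one pushes it down.

```python
# Toy check that P(black tomorrow | black today) can move in either
# direction relative to P(black tomorrow), depending on the joint prior
# over the two observations. The distributions below are invented.

def cond(joint):
    """joint maps (today, tomorrow) -> probability. Returns
    P(tomorrow=black) unconditionally and given today=black."""
    p_tom = sum(p for (t, m), p in joint.items() if m == "black")
    p_today = sum(p for (t, m), p in joint.items() if t == "black")
    p_both = joint.get(("black", "black"), 0.0)
    return p_tom, p_both / p_today

# Positively correlated prior (e.g. a common cause, "all crows black"):
up = {("black", "black"): 0.4, ("white", "white"): 0.4,
      ("black", "white"): 0.1, ("white", "black"): 0.1}
# Anti-correlated prior (like drawing the lone white marble from a bucket):
down = {("black", "black"): 0.1, ("white", "white"): 0.1,
        ("black", "white"): 0.4, ("white", "black"): 0.4}

print(cond(up))    # (0.5, 0.8): seeing black today raises the probability
print(cond(down))  # (0.5, 0.2): seeing black today lowers it
```

Both priors give the same unconditional probability of 0.5 for tomorrow's crow, so only the correlation structure of the prior decides which way the update goes.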

On the other hand, we would not want to draw a conclusion of that sort: even in practice we don’t always update in the same direction in such cases. If we know there is only one white marble in a bucket, and many black ones, then when we draw the white marble, we become very sure the next draw will not be white. Note, however, that this depends on knowing something about the contents of the bucket, namely that there is only one white marble. If we are completely ignorant about the contents of the bucket, then we form universal hypotheses about the contents based on the draws we have seen. And such hypotheses do indeed increase in probability when they are confirmed, as was shown above.
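The contrast can be made precise with a short sketch. The bucket size is my own illustrative choice, and “complete ignorance” is modeled here by Laplace’s rule of succession (a uniform prior over the bucket’s white-marble fraction), which is one standard way to formalize it:

```python
from fractions import Fraction

# Known bucket: exactly 1 white marble among 10 total, drawn without
# replacement. Once the white marble has been drawn, none remain, so
# the probability that the next draw is white collapses to zero:
p_next_white_known = Fraction(0, 1)

# Complete ignorance: a uniform prior over the white fraction. Laplace's
# rule of succession then gives
#     P(next white | k whites in n draws) = (k + 1) / (n + 2).
def rule_of_succession(k, n):
    return Fraction(k + 1, n + 2)

# Before any draws, P(next white) = 1/2; after one white draw it rises:
print(rule_of_succession(0, 0))  # 1/2
print(rule_of_succession(1, 1))  # 2/3
```

So with knowledge of the bucket the white draw pushes the probability down, while under ignorance the same draw pushes it up, exactly as the paragraph above describes.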