Bayesian Probability is for things that are Space-like Separated from You

First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a Bayesian network, and imagine that you are a node in that Bayesian network. If there is a path from you to another node following edges in the network, I will say that node is time-like separated from you, and in your future. If there is a path from another node to you, I will say that node is time-like separated from you, and in your past. Otherwise, I will say that the node is space-like separated from you.
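
To make the definition concrete, here is a minimal sketch in Python on a hypothetical toy network (the graph, node names, and helper functions are my illustration, not part of any formal setup): a node is in your future if it is reachable from you, in your past if you are reachable from it, and space-like separated otherwise.

```python
from collections import deque

def reachable(graph, start):
    """All nodes reachable from `start` by following directed edges."""
    seen, frontier = set(), deque([start])
    while frontier:
        node = frontier.popleft()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return seen

def classify(graph, you):
    """Split the other nodes into future, past, and space-like relative to `you`."""
    future = reachable(graph, you)
    # Build the reversed graph so that "reachable" backwards gives the past.
    reverse = {}
    for parent, children in graph.items():
        for child in children:
            reverse.setdefault(child, []).append(parent)
    past = reachable(reverse, you)
    everything = set(graph) | {c for cs in graph.values() for c in cs}
    spacelike = everything - future - past - {you}
    return future, past, spacelike

# Toy network: A -> You -> B, with C -> D causally disconnected from You.
graph = {"A": ["You"], "You": ["B"], "C": ["D"]}
print(classify(graph, "You"))
# future={'B'}, past={'A'}, space-like={'C', 'D'}
```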

Nodes in your past can be thought of as things that you observe. When you think about physics, it sure does seem like there are a lot of things in your past that you do not observe, but I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice versa (the sketch below checks this on the toy graph). If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you. (Whether or not you actually can decompose things like this is complicated, and related to whether or not you can use the tickle defense in the smoking lesion problem.)
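
As a sanity check on that evaluation-order claim, this sketch (same hypothetical toy graph as above) enumerates every valid evaluation order of the network and confirms that the space-like separated node C can land on either side of You:

```python
from itertools import permutations

graph = {"A": ["You"], "You": ["B"], "C": ["D"]}
nodes = ["A", "You", "B", "C", "D"]
edges = [(p, c) for p, cs in graph.items() for c in cs]

def is_valid_order(order):
    """A valid evaluation order puts every parent before its children."""
    pos = {node: i for i, node in enumerate(order)}
    return all(pos[p] < pos[c] for p, c in edges)

orders = [o for o in permutations(nodes) if is_valid_order(o)]
print(any(o.index("C") < o.index("You") for o in orders))  # True: C can be evaluated before You
print(any(o.index("C") > o.index("You") for o in orders))  # True: C can be evaluated after You
```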

Nodes in your future can be thought of as things that you control. These are not always things that you want to control. For example, you control the output of “You assign probability less than 1/2 to this sentence,” but perhaps you wish you didn’t. Again, if you partially control a fact, I want to say that (maybe) you can break that fact into multiple nodes, some of which you control, and some of which you don’t.
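
Here is a tiny illustration (my own, not from any formal treatment) of what it means to control that sentence: its truth value is a function of the probability you output, and no output ever lines up with the truth it creates.

```python
def sentence_truth(p):
    """Truth value of 'You assign probability less than 1/2 to this sentence.'"""
    return p < 0.5

for p in [0.1, 0.4, 0.5, 0.9]:
    truth = sentence_truth(p)
    accurate = 1.0 if truth else 0.0
    print(f"assign p={p}: sentence is {truth}, so the accurate assignment was {accurate}")
# assign p=0.1: sentence is True, so the accurate assignment was 1.0
# assign p=0.4: sentence is True, so the accurate assignment was 1.0
# assign p=0.5: sentence is False, so the accurate assignment was 0.0
# assign p=0.9: sentence is False, so the accurate assignment was 0.0
```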

So, you know the things in your past, so there is no need for probability there. You don’t know the things in your future, or things that are space-like separated from you. (Maybe. I’m not sure that talking about knowing things you control is not just a type error.) You may have cached that you should use Bayesian probability to deal with things you are uncertain about. You may justify this with the fact that if you don’t use Bayesian probability, there is a Pareto improvement that will cause you to predict better in all worlds. The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them! Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future! Note that many things in our future (like our future observations) are also in the future of things that are space-like separated from us, so we want to use Bayes to reason about those things in order to have better beliefs about our observations.
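
To see concretely how the Pareto-improvement argument breaks down (this framing is mine, with the Brier score standing in for any proper scoring rule), score each probability assignment against the truth value it induces. For an ordinary fixed fact, some assignment scores perfectly; for the self-referential sentence, every assignment pays at least 0.25:

```python
def sentence_truth(p):
    # 'You assign probability less than 1/2 to this sentence.'
    return p < 0.5

def brier_loss(p):
    """Squared error between the assignment and the truth value it induces."""
    truth = 1.0 if sentence_truth(p) else 0.0
    return (p - truth) ** 2

print({p: brier_loss(p) for p in [0.0, 0.25, 0.49, 0.5, 0.75, 1.0]})
# {0.0: 1.0, 0.25: 0.5625, 0.49: 0.2601, 0.5: 0.25, 0.75: 0.5625, 1.0: 1.0}
# No assignment does better than 0.25, so there is no "correct" belief here
# for the usual dominance argument to push you toward.
```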

I claim that logical inductors do not feel entirely Bayesian, and this might be why. They can’t if they are able to think about sentences like “You assign probability less than 1/2 to this sentence.”