Bayesian Probability is for things that are Space-like Separated from You

First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a Bayesian network, and imagine that you are a node in that Bayesian network. If there is a path from you to another node following edges in the network, I will say that node is time-like separated from you, and in your future. If there is a path from another node to you, I will say that node is time-like separated from you, and in your past. Otherwise, I will say that the node is space-like separated from you.
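To make the taxonomy concrete, here is a minimal sketch in Python (with an invented toy graph; none of these names come from the setup above) that classifies every node of a directed graph as future, past, or space-like separated relative to a chosen node, using ordinary reachability:

```python
from collections import deque

def reachable(graph, start):
    """Return the set of nodes reachable from `start` along directed edges."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

def classify(graph, you):
    """Split all nodes into future (you -> node), past (node -> you),
    and space-like separated (no path either way) relative to `you`."""
    nodes = set(graph) | {c for cs in graph.values() for c in cs}
    future = reachable(graph, you)
    # Reverse the edges to find everything with a path *to* you.
    reverse = {}
    for parent, children in graph.items():
        for child in children:
            reverse.setdefault(child, []).append(parent)
    past = reachable(reverse, you)
    spacelike = nodes - future - past - {you}
    return future, past, spacelike

# Toy network: a -> you -> c, while b -> d is disconnected from you.
graph = {"a": ["you"], "you": ["c"], "b": ["d"]}
print(classify(graph, "you"))  # ({'c'}, {'a'}, {'b', 'd'})
```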

Nodes in your past can be thought of as things that you observe. When you think about physics, it sure does seem like there are a lot of things in your past that you do not observe, but I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running or on what observations you get, then it might as well be considered space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you. (Whether or not you actually can decompose things like this is complicated, and related to whether or not you can use the tickle defense in the smoking lesion problem.)
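The "evaluated either before or after you" claim is just the freedom left over in a topological order. A toy illustration (again with made-up nodes): in every valid linearization of the graph, past nodes come before you and future nodes after, while a space-like separated node can land on either side:

```python
from itertools import permutations

def is_topological(order, graph):
    """Check that every edge points forward in the given ordering."""
    pos = {node: i for i, node in enumerate(order)}
    return all(pos[p] < pos[c] for p, cs in graph.items() for c in cs)

graph = {"a": ["you"], "you": ["c"], "b": []}  # 'b' is space-like separated
orders = [o for o in permutations(["a", "you", "c", "b"]) if is_topological(o, graph)]

# 'b' appears both before and after 'you'; 'a' is always before, 'c' always after.
print(any(o.index("b") < o.index("you") for o in orders))             # True
print(any(o.index("b") > o.index("you") for o in orders))             # True
print(all(o.index("a") < o.index("you") < o.index("c") for o in orders))  # True
```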

Nodes in your future can be thought of as things that you control. These are not always things that you want to control. For example, you control the output of “You assign probability less than 1/2 to this sentence,” but perhaps you wish you didn’t. Again, if you partially control a fact, I want to say that (maybe) you can break that fact into multiple nodes, some of which you control, and some of which you don’t.
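To see why such a sentence is controlled rather than merely uncertain, here is a tiny sketch (the setup is mine, not from the original argument) showing that any deterministic probability you assign to it points the wrong way:

```python
def sentence_truth(p):
    """Truth value of: 'You assign probability less than 1/2 to this sentence.'"""
    return p < 0.5

# Low confidence makes the sentence true; high confidence makes it false.
# So whatever you deterministically assign, the ideal assignment was elsewhere.
for p in [0.0, 0.25, 0.49, 0.5, 0.75, 1.0]:
    true = sentence_truth(p)
    print(f"assigned p={p:.2f} -> sentence is {true}; ideal would have been {1.0 if true else 0.0}")
```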

You know the things in your past, so there is no need for probability there. You don’t know the things in your future, or the things that are space-like separated from you. (Maybe. I’m not sure that talking about knowing things you control is not just a type error.) You may have cached that you should use Bayesian probability to deal with things you are uncertain about. You may have this justified by the fact that if you don’t use Bayesian probability, there is a Pareto improvement that will cause you to predict better in all worlds. The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them! Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future! Note that many things in our future (like our future observations) are also in the future of things that are space-like separated from us, so we want to use Bayes to reason about those things in order to have better beliefs about our observations.
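Here is a toy numeric contrast (made-up numbers, log scoring) between the two regimes: when the fact ignores your belief, expected log score is uniquely maximized at the true frequency, which is the usual argument for Bayesian beliefs; when the fact is the self-referential sentence above, no assignment scores better than a coin flip:

```python
import numpy as np

ps = np.linspace(0.01, 0.99, 99)

# Regime 1: the fact ignores your belief. Expected log score peaks exactly
# at the true frequency, so honest Bayesian beliefs win.
true_freq = 0.7
score_indep = true_freq * np.log(ps) + (1 - true_freq) * np.log(1 - ps)
print(ps[np.argmax(score_indep)])  # ~0.70

# Regime 2: the fact is "you assign probability less than 1/2 to this
# sentence". Its truth depends on p itself, and no assignment beats
# log(1/2), even though the fact is deterministic given your belief.
truth = (ps < 0.5).astype(float)
score_selfref = truth * np.log(ps) + (1 - truth) * np.log(1 - ps)
print(ps[np.argmax(score_selfref)], score_selfref.max())  # ~0.50, ~log(0.5)
```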

I claim that logical inductors do not feel entirely Bayesian, and this might be why. They can’t be, if they are able to think about sentences like “You assign probability less than 1/2 to this sentence.”