Bayesian probability as an approximate theory of uncertainty?

Many people believe that Bayesian probability is an exact theory of uncertainty, and other theories are imperfect approximations. In this post I’d like to tentatively argue the opposite: that Bayesian probability is an imperfect approximation of what we want from a theory of uncertainty. This post won’t contain any new results, and is probably very confused anyway.

I agree that Bayesian probability is provably the only correct theory for dealing with a certain idealized kind of uncertainty. But what kinds of uncertainty actually exist in our world, and how closely do they agree with what’s needed for Bayesianism to work?

In a Tegmark Level IV world (thanks to pragmatist for pointing out this assumption), uncertainty seems to be either indexical or logical. When I flip a coin, the information in my mind is either enough or not enough to determine the outcome in advance. If I have enough information, i.e. if it's mathematically possible to determine which way the coin will fall given the bits of information I have received, then I have logical uncertainty, which is no different in principle from being uncertain about the trillionth digit of pi. On the other hand, if I don't have enough information even given infinite mathematical power, then the world must contain copies of me that will see different coinflip outcomes (if there were just one copy, mathematics would be able to pin it down), so I have indexical uncertainty.
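
To make the dichotomy concrete, here's a toy sketch in Python. The hash-based "physics" and the specific bitstrings are made up purely for illustration: in the first case the outcome is already a mathematical consequence of bits I have, and in the second case my bits are consistent with both outcomes.

```python
import hashlib

# Toy model of the coin flip. The hash function stands in for deterministic
# physics: the outcome is a fixed function of the world's state bits.
def coin_outcome(state_bits: str) -> str:
    digest = hashlib.sha256(state_bits.encode()).hexdigest()
    return "heads" if int(digest, 16) % 2 == 0 else "tails"

# Logical uncertainty: my information already pins the outcome down
# mathematically; I just haven't done the computation (like not knowing
# the trillionth digit of pi).
my_bits = "0110...(all the bits that determine the flip)..."
print(coin_outcome(my_bits))  # knowable in principle, given enough compute

# Indexical uncertainty: my information is missing a relevant bit, so both
# completions are consistent with everything I know. In a Level IV world,
# copies of me exist in worlds with each completion and see different outcomes.
consistent_worlds = [my_bits + "0", my_bits + "1"]
print({w[-1]: coin_outcome(w) for w in consistent_worlds})
```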

The trouble is that both indexical and logical uncertainty are puzzling in their own ways.

With indexical uncertainty, the usual example that breaks probabilistic reasoning is the Absent-Minded Driver (AMD) problem. When the probability of you being this or that copy depends on the decision you're about to make, these probabilities are unusable for decision-making. Since Bayesian probability is in large part justified by decision-making, we're in trouble. And the AMD is not an isolated problem. In many imaginable situations faced by humans or idealized agents, there's a nonzero chance of returning to the same state of mind in the future, and that chance slightly depends on the current action. To the extent that's true, Bayesian probability is an imperfect approximation.
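
For readers who don't remember the setup: in the standard Piccione-Rubinstein version, an absent-minded driver passes two identical-looking intersections and can't recall whether he has already passed one; exiting at the first pays 0, exiting at the second pays 4, and driving past both pays 1. Here is a small sketch (the grid search is just one hypothetical way to find the optimum) showing how the planning-optimal policy comes out, and why "the probability that I'm at the first intersection" is tangled up with the very policy being chosen.

```python
# Standard Absent-Minded Driver payoffs (Piccione & Rubinstein's version):
# exit at the first intersection X -> 0, exit at the second Y -> 4,
# continue past both -> 1. The driver can't tell X from Y, so his only
# policy is "continue with probability p" at any intersection.

def planning_value(p: float) -> float:
    # Evaluated before the trip: exit at X w.p. (1-p), reach Y and exit
    # w.p. p*(1-p), continue past both w.p. p*p.
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# The planning-optimal p maximizes 4p(1-p) + p^2, i.e. p = 2/3.
best_p = max((i / 1000 for i in range(1001)), key=planning_value)
print(best_p, planning_value(best_p))  # ~0.667, ~1.333

# The trouble: "the probability that I'm at X right now" is 1/(1+p),
# which depends on the policy I'm choosing. Updating on it and
# re-optimizing at the intersection doesn't reproduce p = 2/3.
def prob_at_first_intersection(p: float) -> float:
    # Of the intersection-moments generated by policy p, a fraction 1/(1+p) are at X.
    return 1 / (1 + p)

print(prob_at_first_intersection(best_p))  # ~0.6, not a number independent of the action
```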

With logical uncertainty, the situation is even worse. We don’t have a good theory of how logical uncertainty should work. (Though there have been several attempts, like Benja and Paul’s prior, Manfred’s prior, or my own recent attempt.) Since Bayesian probability is in large part justified by having perfect agreement with logic, it seems likely that the correct theory of logical uncertainty won’t look very Bayesian, because the whole point is to have limited computational resources and only approximate agreement with logic.
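
As a very crude illustration of the tension (not any of the proposals above): a logically omniscient Bayesian must assign probability 0 or 1 to "N is prime", but a bounded reasoner can only afford an approximate answer, and the approximation improves as the compute budget grows. The heuristic fallback below is a rough prime-number-theorem guess, chosen only for illustration.

```python
import math

def bounded_credence_is_prime(n: int, budget: int) -> float:
    """Toy bounded reasoner: credence that n is prime after `budget` steps
    of trial division. With enough budget it agrees with logic (0 or 1);
    with a small budget it returns an intermediate best guess."""
    limit = int(math.isqrt(n))
    checked = 0
    for d in range(2, limit + 1):
        if checked >= budget:
            # Out of compute: rough heuristic (a Mertens-style correction to
            # the prime-number-theorem density), purely illustrative.
            return min(1.0, 1.78 * math.log(d) / math.log(n))
        if n % d == 0:
            return 0.0  # found a factor: logically settled
        checked += 1
    return 1.0  # no possible divisor remains: logically settled

n = 1_000_003  # the smallest prime above a million
for budget in (10, 100, 10_000):
    print(budget, bounded_credence_is_prime(n, budget))
```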

Another troubling point is that if Bayesian probability is suspect, the idea of “priors” becomes suspect by association. Our best ideas for decision-making under indexical uncertainty (UDT) and logical uncertainty (priors over theories) involve some kind of priors, or more generally probability distributions, so we might want to reexamine those as well. Though if we interpret a UDT-ish prior as a measure of care rather than belief, maybe the problem goes away...
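
To gesture at that last interpretation with a toy sketch (my own hypothetical rendering of the UDT idea, with made-up worlds and payoffs, not any canonical code): the agent fixes one policy that maximizes a weighted sum of payoffs across worlds and never updates the weights, so nothing in the math forces us to read the weights as beliefs rather than degrees of caring.

```python
# Hypothetical worlds and weights: prior probabilities... or degrees of caring.
worlds = {"world_A": 0.7, "world_B": 0.3}

def utility(world: str, policy: str) -> float:
    # Made-up payoff table, purely for illustration.
    table = {
        ("world_A", "act1"): 10, ("world_A", "act2"): 0,
        ("world_B", "act1"): 0,  ("world_B", "act2"): 5,
    }
    return table[(world, policy)]

def udt_choice(worlds: dict, policies: list) -> str:
    # Pick the policy with the best weight-averaged payoff, without ever
    # conditioning the weights on an observation.
    return max(policies, key=lambda pi: sum(w * utility(name, pi)
                                            for name, w in worlds.items()))

print(udt_choice(worlds, ["act1", "act2"]))  # "act1" under these weights
```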