Two probabilities
Consider the following statements:
1. The result of this coin flip is heads.
2. There is life on Mars.
3. The millionth digit of pi is odd.
What is the probability of each statement?
A frequentist might say, “P1 = 0.5. P2 is either epsilon or 1-epsilon, we don’t know which. P3 is either 0 or 1, we don’t know which.”
A Bayesian might reply, “P1 = P2 = P3 = 0.5. By the way, there’s no such thing as a probability of exactly 0 or 1.”
Which is right? As with many such long-unresolved debates, the problem is that two different concepts are being labeled with the word ‘probability’. Let’s separate them and replace P with:
F = the fraction of possible worlds in which a statement is true. F can be exactly 0 or 1.
B = the Bayesian probability that a statement is true. B cannot be exactly 0 or 1.
Clearly there must be a relationship between the two concepts, or the confusion wouldn't have arisen in the first place. And there is: apart from both obeying the usual laws of probability, in the case where we know F but don't know which world we are in, B = F. That's what's going on in case 1. In the other cases, we know F != 0.5, but our ignorance of its actual value makes it reasonable to assign B = 0.5.
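The distinction can be made concrete in a few lines. This is a toy sketch, not anything from the original text: the "worlds" lists are hypothetical models of the coin-flip and pi-digit cases, chosen just to show that F is a fraction over worlds while B is a credence assigned under ignorance.

```python
def fraction_true(worlds, statement):
    """F: the fraction of possible worlds in which the statement is true."""
    return sum(map(statement, worlds)) / len(worlds)

# Case 1: a fair coin flip. "Heads" holds in half the possible worlds,
# so F1 = 0.5; knowing F but not which world we're in, we set B1 = F1.
coin_worlds = ["heads", "tails"]
F1 = fraction_true(coin_worlds, lambda w: w == "heads")
assert F1 == 0.5

# Case 3: the millionth digit of pi has the same parity in every world,
# so F3 is exactly 0 or exactly 1. We model the "odd" branch here purely
# for illustration -- the point is that no world disagrees with the others.
digit_worlds = ["odd", "odd"]
F3 = fraction_true(digit_worlds, lambda w: w == "odd")
assert F3 == 1.0
```

Note that nothing in the code forces B3 = F3: an agent who cannot compute the digit still reasonably assigns B3 = 0.5, even though F3 is pinned at 0 or 1.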
When does the difference matter?
Suppose I offer to bet my $200 that the millionth digit of pi is odd, against your $100 that it's even. With B3 = 0.5, that looks like a good bet from your viewpoint: at 2:1 odds, it breaks even only when B3 = 2/3. But you also know F3 is either exactly 0 or exactly 1. You can further infer that I wouldn't have offered the bet unless I knew F3 = 1, which should lead you to update your B3 to more than 2/3, and decline.
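The break-even point follows from a one-line expected-value calculation. A minimal sketch, using the stakes from the text ($200 against $100); the function name and the specific credences tried are illustrative:

```python
def your_expected_value(b_odd, my_stake=200, your_stake=100):
    """Your expected gain from taking the bet: you win my_stake if the
    digit is even, and lose your_stake if it is odd."""
    return (1 - b_odd) * my_stake - b_odd * your_stake

# Naive credence B3 = 0.5: the bet looks favorable.
assert your_expected_value(0.5) == 50.0

# Break-even exactly at B3 = 2/3, the 2:1 odds being offered.
assert abs(your_expected_value(2 / 3)) < 1e-9

# After inferring that I must know F3 = 1, an updated B3 above 2/3
# makes the bet negative-value, so you decline.
assert your_expected_value(0.75) < 0
```

The inference step is doing the real work: the offer itself is evidence about F3, which is why a bet that looks good at B3 = 0.5 should be refused.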
On a larger scale, suppose we search Mars thoroughly enough to be confident there is no life there. Now we know F2 = epsilon rather than 1 - epsilon, and B2 drops accordingly. By the same evidence, our Bayesian estimate of the probability of life on Europa will also decline toward 0.
Once we understand F and B are different functions, there is no contradiction.