I’m not convinced that there’s a meaningful difference between prior distributions and prior probabilities.
There isn’t when you have only two competing hypotheses. Add a third hypothesis and you really do have to work with distributions. Chapter 4 of Jaynes (Probability Theory: The Logic of Science) explains this wonderfully. It is a long chapter, but well worth the effort.
But the issue is also nicely captured by your own analysis. As you show, any possible linear combination of the two hypotheses can be characterized by a single parameter, which is itself the probability that the next ball will be white. But when you have three hypotheses, you have two degrees of freedom. A single probability number no longer captures all there is to be said about what you know.
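The two-degrees-of-freedom point can be made concrete with a small sketch. The numbers below are hypothetical: three urn hypotheses, each asserting a fixed fraction of white balls, and two priors chosen so that they assign the same probability to the next draw being white yet encode different states of knowledge.

```python
# Three candidate hypotheses about the urn, each fixing P(white); the
# specific fractions here are made up for illustration.
fractions = [0.0, 0.5, 1.0]

def predictive(prior):
    """P(next ball is white) under a prior over the three hypotheses."""
    return sum(p * f for p, f in zip(prior, fractions))

def update_on_white(prior):
    """Posterior over hypotheses after drawing one white ball (Bayes' rule)."""
    unnorm = [p * f for p, f in zip(prior, fractions)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two different priors that assign the SAME probability to the next draw...
prior_a = [0.25, 0.50, 0.25]
prior_b = [0.50, 0.00, 0.50]
assert predictive(prior_a) == predictive(prior_b) == 0.5

# ...but they are different distributions: after one white draw they make
# different predictions, so the single scalar did not capture everything.
print(predictive(update_on_white(prior_a)))  # 0.75
print(predictive(update_on_white(prior_b)))  # 1.0
```

With only two hypotheses this collapse cannot happen: the single mixture weight pins down the whole distribution, which is exactly why a lone probability suffices there.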
In retrospect, it’s obvious that “probability” should refer to a real scalar on the interval [0,1].