Worst Case Bayes: Open Problems

Here, I give a list of open problems related to the Worst Case Bayes prior, which I will denote P, and provide some analysis.

1. Is P computably approximable?

I would not be surprised if there is an easy proof that I missed, because I was focused on showing that the following specific procedure computably approximates P.

Start by assigning probability 1/2 to each sentence. Take a theorem prover which repeatedly outputs theorems of the form “Exactly one sentence in [finite list of sentences] is true.” For each output of the theorem prover, observe the probabilities assigned to the sentences in that list. If they do not sum to one, shift them so that they do, in the unique way that preserves the fact that the Bayes score is independent of the model.
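To make the update step concrete, here is a minimal Python sketch of one pass over the prover's outputs. It assumes an unweighted log Bayes score, under which the unique score-preserving shift turns out to multiply the odds of every sentence in the list by a common factor, found here by bisection. All function names and the toy sentences are mine, not from the post.

```python
from typing import Dict, List

def shift_to_sum_one(probs: List[float], tol: float = 1e-12) -> List[float]:
    """Shift probabilities so they sum to one by multiplying every
    sentence's odds by a common factor c, found by bisection.

    Under an unweighted log Bayes score, scaling all odds by c changes
    the score by log(c) + sum_j log((1 + o_j) / (1 + c * o_j)), which
    does not depend on which sentence in the list is true, so a score
    that was independent of the model stays independent of the model.
    Assumes each probability lies strictly between 0 and 1 and the
    list has at least two sentences."""
    def shifted(c: float) -> List[float]:
        # p' = c*o / (1 + c*o) with odds o = p / (1 - p), simplified:
        return [c * p / (1.0 - p + c * p) for p in probs]

    # sum(shifted(c)) is strictly increasing in c, from 0 up toward
    # len(probs), so a unique root exists; bracket it, then bisect.
    lo, hi = 0.0, 1.0
    while sum(shifted(hi)) < 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sum(shifted(mid)) < 1.0:
            lo = mid
        else:
            hi = mid
    return shifted(hi)

def process_theorems(theorems: List[List[str]], probs: Dict[str, float]) -> Dict[str, float]:
    """Each theorem is a finite list of sentences of which exactly one
    is true; renormalize the probabilities on each list in turn."""
    for sentences in theorems:
        for s, p in zip(sentences, shift_to_sum_one([probs[s] for s in sentences])):
            probs[s] = p
    return probs

# Toy run: every sentence starts at 1/2; the prover then asserts
# "exactly one of A, B, C is true" and "exactly one of A, D is true".
probs = {s: 0.5 for s in "ABCD"}
print(process_theorems([["A", "B", "C"], ["A", "D"]], probs))
```

Within shifts that touch only the listed sentences, this odds-scaling form is the only one for which the score changes by the same amount no matter which sentence in the list is true, which is what “the unique way” above refers to (under the stated scoring assumption).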

2. Does the above procedure converge to P?

3. Does the above procedure even converge?

4. Does the above procedure even converge to something whose Bayes score is independent of the model?

We probably need to make some assumptions about the theorem prover to get any of these.

5. Does P have the property that Bayes score is independent of the model?

I believe the answer to all of these questions is yes, assuming the theorem prover has good properties.

Clearly, 2 implies 1 and 3, 4 implies 3, and 4 together with 2 imply 5. What may be less obvious is that 4 actually implies 2, which means that a positive answer to question 4 would answer all of the other questions on the list.

To see that 4 implies 2, observe that if the procedure converges, the limit must be coherent, because coherence is equivalent to the property that whenever a sentence of the form “Exactly one sentence in [finite list of sentences] is true” is provable, the probabilities assigned to the sentences in that list sum to 1. This assumes that the theorem prover outputs every provable theorem of the correct form infinitely many times: if some provable partition failed to sum to 1 in the limit, then each time the prover output that theorem the procedure would make a correction of some fixed size, contradicting convergence.
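In symbols (my notation, not the post's), writing P∞ for the limit of the procedure, the characterization of coherence being invoked is:

\[
\vdash \Big( \bigvee_{i=1}^{k} \varphi_i \Big) \wedge \bigwedge_{i < j} \neg (\varphi_i \wedge \varphi_j)
\quad \Longrightarrow \quad
\sum_{i=1}^{k} P_\infty(\varphi_i) = 1.
\]

The left-hand side is just “Exactly one of φ1, …, φk is true” written out.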

Since the procedure converges to a coherent prior, the result from the previous post applies: there is no way to increase the Bayes score by any ε in all models simultaneously. Therefore there is no probability assignment whose worst case Bayes score is greater than the Bayes score that the limit of the procedure gets in all models. Therefore the limit of the procedure maximizes worst case Bayes score, and by uniqueness it equals P.
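Spelling this out (again in my own notation): let P∞ be the limit, and let s be its Bayes score, the same in every model M by 4. The previous post's result says that for every probability assignment Q and every ε > 0 there is some model M with

\[
\mathrm{Score}(Q, M) < \mathrm{Score}(P_\infty, M) + \varepsilon = s + \varepsilon,
\]

and hence

\[
\inf_M \mathrm{Score}(Q, M) \;\le\; s \;=\; \inf_M \mathrm{Score}(P_\infty, M),
\]

so P∞ maximizes worst case Bayes score and, by uniqueness, equals P.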

At first, 4 may seem like it should be obvious. At every step along the way, the Bayes score is independent of the model, and this Bayes score is increasing and bounded, so it must converge. It then seems that the individual probabilities should also converge, and that the limit should have Bayes score independent of the model. While this is probably true, it is not a proof: weird stuff could in principle happen by pushing problems further and further into the tail of complicated sentences.
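As a toy illustration of the gap (this ignores all the constraints of the actual procedure, and the weights 2^{-k} are made up purely for the example): a weighted sum over sentences can converge even though individual terms oscillate forever. Take

\[
S_n = \sum_{k \ge 1} 2^{-k} s_k(n), \qquad
s_1(n) = (-1)^n \delta, \qquad
s_2(n) = -2 (-1)^n \delta, \qquad
s_k(n) = 0 \text{ for } k \ge 3.
\]

Then S_n = 0 for every n, so the aggregate converges, while s_1(n) and s_2(n) never settle down; in the actual procedure, such compensating terms could in principle be hidden among ever more complicated sentences. Ruling this kind of thing out is exactly what a proof of 4 would have to do.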
