This is an attempt at resolving my own confusion:
Suppose a coin is about to be flipped, and I have been told that it is biased towards heads, so I bet on heads. Now suppose I am informed that it is in fact biased in a random direction: suddenly I should reconsider whether betting on heads is still the best strategy. I might not switch to tails (there is a cost to switching, and I did have some evidence that heads was the direction of bias, even if that evidence turned out to be less than fully informative), but I should move my estimate of success much closer to 50%.
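The arithmetic behind this can be sketched with a small simulation. Here the bias magnitude (0.7) and my credence that heads is the favoured direction are hypothetical parameters I've chosen for illustration; the point is that when the direction is fully random my bet on heads wins only about half the time, while any residual evidence for the heads direction pulls the success rate partway back up.

```python
import random

def simulate(bias=0.7, trials=100_000, p_heads_direction=0.5):
    """Bet heads on a coin of fixed bias magnitude whose direction is
    uncertain: with probability p_heads_direction the coin favours heads,
    otherwise it favours tails. Return the fraction of flips won."""
    wins = 0
    for _ in range(trials):
        # Resolve the direction of bias, then flip the coin.
        p_heads = bias if random.random() < p_heads_direction else 1 - bias
        if random.random() < p_heads:
            wins += 1
    return wins / trials
```

Analytically, the success probability is `p_heads_direction * bias + (1 - p_heads_direction) * (1 - bias)`: with a totally uninformative direction (0.5) that collapses to exactly 50% regardless of the bias magnitude, and with, say, 0.8 credence in the heads direction it rises to 0.62.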
I seem to be arguing that when there is a lot of uncertainty about the model, I should treat any given P and not-P as equally likely, because that seems like the best ignorance prior for a binary event about which I have very little information. When one learns that there is a lot of structural/metaphysical uncertainty around the universe, identity, et cetera, one should revise one's probabilities for any obviously relevant P/not-P pair towards 50% each, and note that one would not be too surprised by either answer turning out true (since one is now expecting almost anything to happen).