There are a couple of others, all of which seem to rely on being told probabilities. You assign P(A) = 0.5, and then get told that P(A) = 0.7.
It seems that either P(A) = 0.7 is someone else’s degree of belief, in which case Aumann’s agreement theorem comes into play, or it is a statement about what your degree of belief should be given your evidence. But idealised Bayesian agents don’t make that sort of mistake!
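For concreteness, here’s a minimal Python sketch of one way an agent could fold someone else’s stated credence into its own: weighted pooling in log-odds space. The pooling rule and the `trust` weight are assumptions for illustration, not anything the scenario pins down — which is rather the point; the scenario never says how the reported 0.7 should bear on your 0.5.

```python
from math import log, exp

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return log(p / (1 - p))

def sigmoid(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + exp(-x))

def pool(my_p: float, their_p: float, trust: float = 0.5) -> float:
    """Combine two credences by weighted averaging in log-odds space.

    `trust` is how much weight the other agent's report gets; it is a
    free parameter here, not something the scenario determines.
    """
    return sigmoid((1 - trust) * logit(my_p) + trust * logit(their_p))

# You assign P(A) = 0.5 and are told P(A) = 0.7:
print(pool(0.5, 0.7))            # equal weight  -> ~0.604
print(pool(0.5, 0.7, trust=1.0)) # full deference -> 0.7
```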
Are there other examples that have the same bizarrely wrong results?