Excuse me while I geek off on a tangent about Bayesian inference and Kolmogorov complexity.
Can one ever expect to be surprised?
If I generate a 100-digit random number, whatever number I get will have had a prior probability of 10^-100, but that does not mean I will be surprised by it. If it turned up all zeros I would be surprised, but that is most unlikely to happen. In this case, my surprise on seeing the generated number might theoretically be measured as the amount by which it can be compressed, the expectation value of which is essentially zero.
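(To put rough numbers on the compressibility point: under the uniform prior every 100-digit outcome carries the same -log2(10^-100) ≈ 332 bits of surprisal, and only special strings like all zeros compress. The sketch below uses zlib as a crude, imperfect stand-in for Kolmogorov complexity; the exact byte counts are just whatever zlib produces, not anything stated in the discussion.)

```python
import random
import zlib

# Under the uniform prior, every 100-digit string has the same surprisal:
# -log2(10**-100) ~= 332 bits, no matter which digits come up.

typical = "".join(random.choice("0123456789") for _ in range(100))
all_zeros = "0" * 100

for label, s in [("typical draw", typical), ("all zeros   ", all_zeros)]:
    # zlib is only a rough proxy for compressibility: the all-zeros string
    # shrinks to a handful of bytes, while a typical draw barely shrinks
    # (or even grows) once the format's overhead is counted.
    print(f"{label}: 100 chars -> {len(zlib.compress(s.encode()))} bytes")
```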
But what is the expectation of my surprise at the answer to a question for which I cannot think of anything that could even be a possible explanation (including explanations that deny the question)? I don’t know how to measure that within the Bayes/Kolmogorov system of ideas.
You implicitly have the hypotheses “this random number generator is broken and always returns 0” and “this random number generator works fine.” You start off being pretty sure the latter is true. Your shift to the former upon seeing all zeros is where the surprise comes from.
Right, and one thing to note is that, given basic programming assumptions, a return of all zeros is a much more likely failure mode of a random number generator than, say, always returning 72 or something similar.
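(A minimal sketch of that hypothesis-shift view, assuming an arbitrary one-in-a-million prior that the generator is stuck at zero; the prior is illustrative, not a figure anyone stated above.)

```python
from math import log2

# Two hypotheses about the generator, with an illustrative prior on "broken".
p_broken = 1e-6               # prior: broken, always returns 0 (assumed figure)
p_fine = 1.0 - p_broken       # prior: works as intended

# Likelihood of observing 100 zeros under each hypothesis.
lik_broken = 1.0
lik_fine = 10.0 ** -100

evidence = p_broken * lik_broken + p_fine * lik_fine
posterior_broken = p_broken * lik_broken / evidence

print(f"P(broken | all zeros) ~= {posterior_broken:.12f}")      # ~ 1.0
print(f"surprisal of all zeros ~= {-log2(evidence):.1f} bits")  # ~ 20 bits, not 332
```

Once the “broken” hypothesis is in the mixture, seeing all zeros only costs about 20 bits of surprise rather than 332; most of the surprise shows up as the jump in posterior toward “broken.”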
You can expect surprise, because “surprise” isn’t a linear function of the data.
Simplest example: assign a prior probability distribution to some uncertain real variable x. Your expected value E[x] is the average of this distribution. Your expected error E[E[x] - x] is indeed zero! But unless you already know the exact true value, your expected error magnitude E[abs(E[x] - x)] is always positive.
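(A quick numerical check of that last claim, with a standard normal prior chosen purely for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)   # draws from the prior on x
estimate = x.mean()              # E[x] under the prior

print("E[E[x] - x]   ~=", (estimate - x).mean())         # ~ 0
print("E[|E[x] - x|] ~=", np.abs(estimate - x).mean())   # ~ sqrt(2/pi) ~ 0.80 > 0
```

For a standard normal the second number comes out near sqrt(2/pi) ≈ 0.80, which is the point: the expected signed error vanishes, but the expected magnitude of the miss does not.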