The problem with the Sleeping Beauty problem, more specifically, is that the intuitive translation of it into rules like P(woken up on Tuesday and Interviewed | experimenter got tails) = 1 violates normalization on the “number of observers,” or the “probability she is interviewed at all.”
Which is what you’d expect, since she can get interviewed more times than she remembers. But still, it violates normalization.
The 1⁄3, 2⁄3 solution corresponds to stating the problem intuitively and computing to the end, ignoring the normalization-breaking.
The 1⁄2, 1⁄2 solution corresponds to restoring normalization in either of the two most obvious ways and not thinking too hard about the unintuitive results.
I agree that translating the problem into a slightly different framework can help you resolve it, but I don’t think it’s necessary to generalize as much as you did or bring in decision theory. All we need to do is embrace frequentist statistics :D
That is to say, what you do is calculate expected frequencies rather than probabilities. Manipulating these lets you “hide” the normalization-breaking in a mathematically acceptable way, since an expected frequency of 1.5 doesn’t have the same problems a probability of 1.5 does.
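As a minimal sketch of what "expected frequencies rather than probabilities" looks like (the event labels here are just my own illustration, not anything canonical): assign each (coin, day) interview outcome its expected frequency per experiment, let the total exceed 1, and only divide through at the end if you want the thirder's per-awakening answer.

```python
from fractions import Fraction

# Expected number of interviews per experiment for each (coin, day) outcome.
# Heads: woken and interviewed only on Monday. Tails: Monday and Tuesday.
freq = {
    ("heads", "Mon"): Fraction(1, 2),  # heads happens half the time, 1 interview
    ("tails", "Mon"): Fraction(1, 2),  # tails happens half the time...
    ("tails", "Tue"): Fraction(1, 2),  # ...and yields a second interview
}

# Total expected interviews per experiment: 3/2, not 1.
# This is the "normalization-breaking" — harmless for frequencies.
total = sum(freq.values())
print(total)  # 3/2

# Normalizing per awakening gives the thirder answer:
p_heads_given_interviewed = freq[("heads", "Mon")] / total
print(p_heads_given_interviewed)  # 1/3

# Normalizing per experiment (ignoring the repeat interview) gives the halfer answer:
p_heads_per_experiment = Fraction(1, 2)
```

An expected frequency of 3⁄2 interviews per experiment is perfectly respectable arithmetic, whereas a "probability" of 3⁄2 is not; the two camps just disagree about which denominator to divide by.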
Definitely an odd problem, though.