In the second reformulation, the answer is clear from Bayes’ rule.
But the relevant event structure is not clear. It’s easy to do the math, but it’s not clear which math should be done. The discussions of Sleeping Beauty a few months back (I think it was) should’ve made it clear that there is little point in postulating probabilities (cousin_it might have a citation ready, I remember he made this point a few times), because it’s mostly a dispute about definitions (of random variables, etc.).
Instead, one should consider specific decision problems and ask what decisions should be made. Figuring out the decisions might still involve calculating probabilities, but these would be introduced for a clear purpose, in the context of a particular method for solving a particular decision problem, so it’s not merely a matter of definitions and there’s actually a right answer. In solving different decision problems, we might even encounter different “contradictory” probabilities associated with the same verbal specifications of events.
Considering it as a decision problem is taking a particular side in the definition/axiom dispute—a side that also corresponds to requiring that the probabilities be the frequencies; i.e., if you use the other definitions, the probabilities will not match the frequencies. So I think the resolution to Sleeping Beauty is even stronger—there is a right side, and a right way to go about the problem.
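To make the frequency point concrete, here is a minimal simulation sketch (variable names are mine; the protocol assumed is the standard one, with one awakening on heads and two on tails): counting per experiment and counting per awakening yield different frequencies for the same verbal event “the coin landed heads.”

```python
import random

# Minimal Sleeping Beauty sketch: heads -> one awakening, tails -> two.
random.seed(0)
N = 100_000
heads_experiments = 0
heads_awakenings = 0
total_awakenings = 0
for _ in range(N):
    heads = random.random() < 0.5
    if heads:
        heads_experiments += 1
        heads_awakenings += 1
        total_awakenings += 1
    else:
        total_awakenings += 2

print(heads_experiments / N)                # per-experiment frequency, ~1/2
print(heads_awakenings / total_awakenings)  # per-awakening frequency, ~1/3
```

Both counts tally the same coin flips; only the choice of reference class (experiments vs. awakenings) differs, which is exactly the definitional dispute.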
Considering what as a decision problem? As formulated, we are not given one.
Exactly! :P
Assigning constant rewards for correct answers can be compared with assigning constant rewards to each person at the end of the experiment, and these options are (I think) isomorphic to the two ways of looking at the problem through probability—the fact that one choice seems more intuitive through the lens of decision theory is a fact about our brains, not about the problem.
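A minimal sketch of that isomorphism (the scoring function and names are mine, not from the original problem statement): score a constant guess under the two reward schemes, then normalize the expected rewards to read off the credence in heads that each scheme implicitly prices in.

```python
from fractions import Fraction

P_HEADS = Fraction(1, 2)
AWAKENINGS = {"heads": 1, "tails": 2}  # tails produces two awakenings

def expected_reward(guess, per_awakening):
    """Expected reward for always answering `guess` when asked."""
    total = Fraction(0)
    for coin, p in (("heads", P_HEADS), ("tails", 1 - P_HEADS)):
        if guess != coin:
            continue
        # Constant reward per correct answer, or one constant reward per person.
        total += p * (AWAKENINGS[coin] if per_awakening else 1)
    return total

# Per-correct-answer rewards: answering "tails" is worth twice as much,
# matching the thirder's 1/3 credence in heads.
r_h, r_t = expected_reward("heads", True), expected_reward("tails", True)
print(r_h / (r_h + r_t))   # 1/3

# Per-person rewards: the two answers are symmetric, matching 1/2.
r_h, r_t = expected_reward("heads", False), expected_reward("tails", False)
print(r_h / (r_h + r_t))   # 1/2
```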
You’ve just shifted the definitional debate to deciding which decision problem to use, which was not my suggestion.
But I claim it is an inevitable consequence of your suggestion, since the same sorts of arguments that might be made about which way of calculating the probability is correct can be made about which utility problem to solve, if you’re doing the same math. Put another way, you can take the decision-theory result and use it to calculate the rational probabilities, so any stance on using decision theory is a stance on probabilities (if the rewards are fixed).
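The direction “a decision result fixes the probabilities” can be sketched like this (a hypothetical helper, my naming): with fixed unit rewards, the break-even price of a bet on heads is the implied probability, and it depends only on how many times the bet is scored on the tails branch.

```python
from fractions import Fraction

def implied_p_heads(bets_if_tails):
    """Break-even price p of a unit bet on heads (pays 1 if heads, costs p),
    placed at every scored answer: once if heads, `bets_if_tails` times if
    tails. Solve 0 = 1/2*(1 - p) - 1/2*bets_if_tails*p for p."""
    half = Fraction(1, 2)
    return half / (half + half * bets_if_tails)

print(implied_p_heads(2))  # bet at every awakening -> 1/3
print(implied_p_heads(1))  # one bet per experiment  -> 1/2
```

Once the reward structure is pinned down, inverting the optimal bet recovers a unique probability, which is why a stance on the decision problem is also a stance on the probability.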
I think the problem just looks so obvious to us when we use decision theory that we don’t connect it to the non-obvious-seeming dispute over probabilities.
Again, I didn’t suggest trying to reformulate a problem as a decision problem as a way of figuring out which probability to assign. Probability-assignment is not an interesting game. My point was that if you want to understand a problem, understand what’s going on in a given situation, consider some decision problems and try to solve them, instead of pointlessly debating which probabilities to assign (or which decision problems to solve).
Oh, so you don’t think that viewing it as a decision problem clarifies it? Then choosing a decision problem to help answer the question doesn’t seem any more helpful than “make your own decision on the probability problem,” since they’re the same math. This then veers toward the even-more-unhelpful “don’t ask the question.”
It’s not intended to help with answering the question, any more than dissolving any other definitional debate helps with determining which definition is better. It’s intended to help with understanding the thought experiment instead.
Changing the labels on the same math isn’t “dissolving” anything, as it would be if probabilities were like the word “sound.” “Sound” goes away when dissolved because it’s subjective, and dissolving switches to objective language. Probabilities are uniquely derivable from objective language. Additionally, there is no “unaskable question,” at least in standard probability theory—you’d have to propose a fairly extreme revision for a relevant decision-theory answer not to bear on the question of probabilities.