The Sleeping Beauty scenario is problematic to discuss because it’s posed as a question about probabilities rather than utilities. Let’s consider Parfit’s Hitchhiker instead. If you’d like some concrete numbers, suppose you get 0 utility if you’re left in the desert, 10 if you’re taken back to civilisation, but then lose 1 if you have to pay. So the utilities in the ‘Util’ boxes on my diagram are 9, 10, 0, in that order.
Now, if you have an opportunity to act at all, then you can say with certainty where you are in the tree-diagram: you’re at the one-and-only Player node. This corresponds to “I’ve already been taken to my destination, and now I need to decide whether to pay the driver.” Conditional upon being at that node, it’s obvious that you maximise your utility by not paying (10 instead of 9).
However, if you make no assumptions about ‘the state of the world’ (i.e. whether or not you were offered a ride) and ask “Which of the two strategies maximizes my expected utility at the outset?” then the strategy where you pay up gets utility 9 and the one where you don’t gets 0, because the driver, who can predict which strategy you’ll follow, only offers you the ride if you would pay.
So looking at the unconditional expected utility basically means that you deliberately ‘forget’ the information you have about where you are in the game and just look for “a strategy for the blue box” that will maximize your utility over many start-to-finish iterations of the game.
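To make that concrete, here’s a minimal sketch in code of the two evaluations. The toy model is mine, not the diagram itself, and it builds in the assumption that the driver perfectly predicts which strategy you’ll follow; that assumption is what makes the pay-up strategy come out at 9 and the other at 0.

```python
# Toy model of Parfit's Hitchhiker with the utilities 9, 10, 0 from above.
# Assumption (mine): the driver perfectly predicts your strategy and only
# offers the ride if you would pay.

UTIL_RIDE_AND_PAY = 9    # driven to civilisation, then pay the driver
UTIL_RIDE_NO_PAY = 10    # driven to civilisation, refuse to pay
UTIL_DESERT = 0          # left in the desert

def start_to_finish_utility(strategy_pays: bool) -> int:
    """Utility of one full play of the game for a fixed strategy."""
    offered_ride = strategy_pays  # the driver's (assumed perfect) prediction
    if not offered_ride:
        return UTIL_DESERT
    return UTIL_RIDE_AND_PAY if strategy_pays else UTIL_RIDE_NO_PAY

# Unconditional, "from the outset" evaluation of the two strategies:
print(start_to_finish_utility(True))   # pay up: 9
print(start_to_finish_utility(False))  # don't pay: 0

# Conditional evaluation at the Player node (the ride already happened):
print(UTIL_RIDE_AND_PAY)  # pay: 9
print(UTIL_RIDE_NO_PAY)   # don't pay: 10
```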
I don’t know where the probabilities are supposed to be in that graphical model, so I don’t know how to apply my understanding of “expectation”. I’m not even sure what I’m supposed to be uncertain about, so I’m not sure how to apply my understanding of “probability”.
I don’t know what the semantics of nodes and arrows are, either. Labeling the arrows and the “Util” boxes would help.
The Sleeping Beauty scenario is problematic to discuss
That might justify removing it from the OP, or at least moving it out of the critical path across the inferential distance.
because it’s posed as a question about probabilities rather than utilities
It isn’t clear how you can discuss expectations without discussing probabilities.
In the case of Newcomb’s Problem, if Omega is only assumed to have some finite accuracy, say 0.9, I can at least start to see how to make it about probabilities and expectations. I’ll take a shot at it sometime.
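Roughly, the kind of calculation I have in mind looks like the sketch below; the 0.9 is the accuracy assumed above, and the $1,000,000 / $1,000 payoffs are the usual ones from the literature, filled in here just for concreteness.

```python
# Sketch of Newcomb's Problem with an imperfect predictor.
# The 0.9 accuracy is the figure above; the payoff amounts are the
# standard ones from the literature, assumed here for concreteness.

ACCURACY = 0.9        # probability that Omega predicts your choice correctly
BIG = 1_000_000       # opaque box: filled only if Omega predicted one-boxing
SMALL = 1_000         # transparent box: always contains this much

# Expected utility of each strategy, letting the probability that the
# opaque box is full depend on which strategy you actually follow:
eu_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
eu_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(eu_one_box)  # 900000.0
print(eu_two_box)  # 101000.0
```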