I don’t know where the probabilities are supposed to be in that graphical model, so I don’t know how to apply my understanding of “expectation”. I’m not even sure what I’m supposed to be uncertain about, so I’m not sure how to apply my understanding of “probability”.
I don’t know what the semantics of nodes and arrows are, either. Labeling the arrows and the “Util” boxes would help.
> The Sleeping Beauty scenario is problematic to discuss

That might justify removing it from the OP, or at least moving it out of the critical path across the inferential distance.

> because it’s posed as a question about probabilities rather than utilities

It isn’t clear how you can discuss expectations without discussing probabilities.
In the case of Newcomb’s Problem, if Omega is only assumed to have some finite accuracy, say 0.9, I can at least start to see how to make it about probabilities and expectations. I’ll take a shot at it sometime.