As far as I know, every argument for utility assumes (or implies) that whenever you make an observation, you stop caring about the possible worlds where that observation went differently.
I read this comment years ago and I’m reflecting on it again.
What I/Cousin_it did in the Counterfactual Prisoner’s Dilemma is essentially construct a situation where refusing to give up a trivial amount of value symmetrically burns a lot of value in the other counterfactual branch. If you don’t care about the other world, you’d press such a button if one could exist. Now someone might be skeptical about the possibility of such a button because they’re doubtful about a perfect predictor, but if this doubt were removed then the Counterfactual Prisoner’s Dilemma would bite. In fact, I would argue that it would be quite surprising if a proposed decision theory were to fail for perfect predictors without having deeper issues:
The Original Counterfactual Prisoner’s Dilemma: Omega, a perfect predictor, flips a coin and tells you how it came up. If it comes up heads, Omega asks you for $100, then pays you $10,000 if it predicts you would have paid had it come up tails. If it comes up tails, Omega asks you for $100, then pays you $10,000 if it predicts you would have paid had it come up heads. In this case the coin came up heads, and Omega made its prediction before your decision.
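To make the payoff structure concrete, here’s a minimal sketch (my own illustration, not part of the original setup) that enumerates all four possible policies, assuming Omega’s prediction simply matches what your policy would in fact do:

```python
from itertools import product

COIN_FLIPS = ("heads", "tails")

def branch_payoff(policy, flip):
    # You lose $100 if you pay in the observed branch, and gain $10,000
    # iff Omega predicts you would have paid in the *other* branch. With
    # a perfect predictor, the prediction just equals the policy's choice.
    other = "tails" if flip == "heads" else "heads"
    payoff = -100 if policy[flip] else 0
    if policy[other]:
        payoff += 10_000
    return payoff

for pay_heads, pay_tails in product((True, False), repeat=2):
    policy = {"heads": pay_heads, "tails": pay_tails}
    ev = sum(branch_payoff(policy, flip) for flip in COIN_FLIPS) / 2
    print(f"pay on heads={pay_heads}, pay on tails={pay_tails}: EV = ${ev:,.0f}")
```

Committing to pay in both branches guarantees $9,900; refusing after every observation guarantees $0. An agent that stops caring about the unobserved branch refuses after either flip, and so ends up with the $0 outcome in both worlds.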
Honestly, I wish I’d defined it more like the actual prisoner’s dilemma, as that would have made the thought experiment cleaner:
The Refined Counterfactual Prisoner’s Dilemma: Omega, a perfect predictor, flips a coin. Later on, Omega explains the scenario, including the result of the coin flip and details that are yet to come, and asks you for $1. It turns out that before Omega came to speak to you, it made a prediction about what you would have chosen if the coin had come up the other way. If it predicted earlier that you wouldn’t have paid, the scenario finishes with Omega inflicting $1 million worth of damage on you.
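The same enumeration works for the refined version (again just my sketch under the same perfect-predictor assumption): paying $1 in both branches is a guaranteed trivial loss, while any policy that refuses somewhere exposes you to the $1 million penalty.

```python
from itertools import product

def branch_payoff(policy, flip):
    # You lose $1 if you pay in the observed branch, and suffer $1,000,000
    # of damage iff Omega predicted you wouldn't have paid in the other
    # branch (again treating the perfect prediction as the policy itself).
    other = "tails" if flip == "heads" else "heads"
    payoff = -1 if policy[flip] else 0
    if not policy[other]:
        payoff -= 1_000_000
    return payoff

for pay_heads, pay_tails in product((True, False), repeat=2):
    policy = {"heads": pay_heads, "tails": pay_tails}
    ev = (branch_payoff(policy, "heads") + branch_payoff(policy, "tails")) / 2
    print(f"pay on heads={pay_heads}, pay on tails={pay_tails}: EV = ${ev:,.2f}")
```

Paying in both branches costs a guaranteed $1; refusing in both costs a guaranteed $1,000,000, which mirrors the mutual-defection outcome of the actual prisoner’s dilemma.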