The counterfactual mugging isn’t that strange if you think of it as an entrance fee for a positive-expected-utility bet: a bet you happened to lose in this instance, but it is good to have the decision theory that lets you enter such bets in the abstract.
The problem is that people aren’t very good at understanding that your specific decision isn’t separate from your decision theory applied to a specific context: DecisionTheory(Context) = Decision. For your decision theory to be a winning one in general, you may eventually have to accept some individual ‘losing’ decisions: that’s the price of having a winning decision theory overall.
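A toy sketch of that point (my own illustration; the context labels are hypothetical): a decision is not a free-standing object, it is the output of a decision theory applied to a context, and what deserves evaluation is the whole mapping.

```python
# DecisionTheory(Context) = Decision, as a function.
# This agent pre-commits to paying on the losing branch, which is
# exactly what qualifies it for the winning branch.

def decision_theory(context: str) -> str:
    if context == "coin_came_up_tails_pay_100":
        return "pay"
    if context == "coin_came_up_heads_collect_prize":
        return "accept"
    return "do_nothing"
```

Judging the theory means scoring this whole mapping over the contexts it may face, not any single output in isolation.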
I doubt that a decision theory that simply refuses to update on certain forms of evidence can win consistently.
If Parfit’s hitchhiker “updates” on the fact that he’s now reached the city and therefore doesn’t need to pay the driver, and furthermore if Parfit’s hitchhiker knows in advance that he’ll update on that fact in that manner, then he’ll die.
If we had mind-scanners/simulators right now that could perform such counterfactual experiments on our minds, and this sort of bet could therefore become part of everyday life, being the sort of person who pays the counterfactual mugger would eventually be seen by everyone to have positive utility, because such people would eventually be offered the winning side of that bet (free money worth many times your cost).
The sort of person who wouldn’t pay the counterfactual mugger, meanwhile, would never be offered such free money at all.
If, and only if, you regularly encounter such bets.
The likelihood of encountering the winning side of the bet is proportional to the likelihood of encountering its losing side. So whether you are likely to encounter the bet once in your lifetime or a hundred times doesn’t significantly affect the decision theory you ought to possess in advance if you want to maximize your utility.
In addition to Omega asking you to give him $100 because the coin came up tails, also imagine Omega coming to you and saying “Here’s $100,000, because the coin came up heads and you’re the type of person that would have given me $100 if it had come up tails.”
That scenario makes it obvious to me that being the person that would give Omega $100 if it had come up tails is the winning type of person...
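The arithmetic behind that intuition can be sketched in a few lines, assuming a fair coin and the figures above (pay $100 on tails; receive $100,000 on heads if, and only if, you are the paying type); the variable names are mine:

```python
# Expected value of the combined scenario for the two policy types.
P_HEADS = P_TAILS = 0.5
COST, PRIZE = 100, 100_000

# Agent whose policy is "pay when asked":
ev_payer = P_TAILS * (-COST) + P_HEADS * PRIZE

# Agent whose policy is "refuse": never loses the $100, but Omega never
# offers it the prize either, because it isn't the paying type.
ev_refuser = P_TAILS * 0 + P_HEADS * 0

print(ev_payer)    # 49950.0
print(ev_refuser)  # 0.0
```

On these numbers the paying type comes out far ahead on average, even though it strictly loses in the tails branch.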