So, is it reasonable to pre-commit to giving the $100 in the counterfactual mugging game? (Pre-commitment is one solution to the Newcomb problem.) On first glance, it seems that a pre-commitment will work.
But now consider “counter-counterfactual mugging”. In this game, Omega meets me and scans my brain. If it finds that I’ve pre-committed to handing over the $100 in the counterfactual mugging game, then it empties my bank account. If I haven’t pre-committed to doing anything in counterfactual mugging, it rewards me with $1 million. Damn.
So what should I pre-commit to doing, if anything? Should I somehow try to assess my likelihood of meeting Omega (in some form or other) and guess what sort of parlour game it is likely to play with me, and for what stakes? Has anyone got any idea how to do that assessment, without unduly privileging the games that we happen to have thought of so far? This way madness lies, I fear...
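To make the dilemma concrete, here is a toy expected-value calculation. The $100 cost, the emptied bank account, and the $1 million reward come from the games as described above; the $10,000 heads-world prize, the bank balance, and all probabilities are invented for illustration, since the whole problem is that we have no principled way to assign them:

```python
# Toy expected-value comparison: pre-commit to paying in counterfactual
# mugging, or not, when Omega might instead be running the counter game.
# All probabilities and the $10,000 / bank-balance figures are illustrative
# assumptions, not part of the original scenarios.

def expected_values(p_cfm, p_ccfm, bank_balance=10_000):
    """Return (expected payoff if pre-committed, expected payoff if not),
    given the probability of meeting Omega in counterfactual mugging
    (p_cfm) versus counter-counterfactual mugging (p_ccfm)."""
    # Counterfactual mugging with a fair coin: pre-committing wins an
    # assumed $10,000 in the heads world and loses $100 in the tails world.
    cfm_gain = 0.5 * 10_000 - 0.5 * 100
    # Counter game: pre-committing loses the bank account; having made no
    # pre-commitment earns $1 million.
    precommit = p_cfm * cfm_gain + p_ccfm * (-bank_balance)
    no_precommit = p_cfm * 0 + p_ccfm * 1_000_000
    return precommit, no_precommit

# If the two games seem equally likely, refusing to pre-commit dominates:
print(expected_values(p_cfm=0.5, p_ccfm=0.5))
```

The point of the sketch is only that the answer flips with the assumed probabilities, and nothing in the setup tells us what those probabilities should be, which is exactly the "madness" above.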
The interest with these Omega games is that we don’t meet actual Omegas, but do meet each other, and the effects are sometimes rather similar. We do like the thought of friends who’ll give us $1000 if we really need it (say in a once-in-a-lifetime emergency, with no likelihood of reciprocity) because they believe we’d do the same for them if they really needed it. We don’t want to call that behaviour irrational. Isn’t that the real point here?
Should I somehow try to assess my likelihood of meeting Omega (in some form or other) and guess what sort of parlour game it is likely to play with me, and for what stakes? Has anyone got any idea how to do that assessment, without unduly privileging the games that we happen to have thought of so far? This way madness lies I fear...
Not exactly madness, but Pascal’s wager. If you haven’t seen any evidence of Omega existing by now, nor any theory of how predictions such as his could be possible, and word of his parlour-game preferences has not reached you, then chances are he is so unlikely in this universe that the whole business belongs in the same category as Pascal’s wager.
There is one nice thing about the real-world friend case, which is that you actually might be in the reverse situation later. So it’s not just a counterfactual you’re considering; it’s a real future possibility.
Take that away and it’s more like Omega; but then it’s not the real-world problem anymore!