Newcomb’s Problem still holds in much more realistic situations. Say someone who knows you really, really well comes up to you and makes the same offer. Suppose you don’t mind taking their money, and you reckon they know you well enough to be 80% likely to predict your choice correctly. One-boxing is still the right decision, because the expected gain from one-boxing is:
(0.8 × 1 000 000) + (0.2 × 0) = 800 000
and for two-boxing:
(0.8 × 1 000) + (0.2 × 1 001 000) = 800 + 200 200 = 201 000
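The expected-value comparison above can be sketched in a few lines of Python (using exact fractions so the 80% accuracy figure doesn’t pick up floating-point noise):

```python
from fractions import Fraction

# Predictor accuracy: 80% chance they correctly bet on your choice.
p = Fraction(8, 10)

# One-boxing: if the predictor is right, the opaque box holds 1 000 000;
# if wrong, it holds nothing.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-boxing: if the predictor is right, you get only the visible 1 000;
# if wrong, you get 1 001 000.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000

print(ev_one_box)  # 800000
print(ev_two_box)  # 201000
```

One-boxing comes out ahead, 800 000 to 201 000, matching the arithmetic above.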
But Causal Decision Theory still reasons the same way, because your decision still has no causal influence on whether the boxes are in state 1 or state 2. So Causal Decision Theory still says to two-box.
So Newcomb’s Problem still holds in more realistic situations.
Is that the sort of thing you were looking for, or have I missed the point?