In what way is Newcomb’s Problem “anti-causality”?
If you don’t like the superpowerful predictor, it works for human agents as well. Imagine you need to buy something but don’t have cash on you, so you tell the shopkeeper you’ll pay him tomorrow. If he thinks you’re telling the truth, he’ll give you the item now and let you come back and pay tomorrow. If not, you have to come back with cash, losing a day’s worth of use, and so some utility.
So your best bet (if you’re selfish) is to tell him you’ll pay tomorrow, take the item, and never come back. But what if you’re a bad liar? Then you’ll blush or stammer or whatever, and you won’t get your good.
A regular causal agent, however, having taken the item, will not come back the next day—and you know it, and it will show on your face. So in order to get what you want, you have to actually be the kind of person who respects their past selves’ decisions—a TDT agent, or a CDT agent with some pre-commitment system.
The above has the same attitude to causality as Newcomb’s Problem—specifically, it includes another agent rewarding you based on that agent’s prediction of your future behaviour. But it’s a situation I’ve been in several times.
EDIT: Grammar.
This example is much like Parfit’s Hitchhiker, in a less extreme form.
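For concreteness, the payoff structure can be sketched as a toy model. The specific numbers, and the assumption that the shopkeeper reads your disposition perfectly, are mine, not from the original scenario:

```python
# Toy model of the shopkeeper scenario. All numbers are illustrative
# assumptions; the point is only the ordering of the payoffs.

ITEM_VALUE = 10   # utility of having the item today
PRICE = 8         # what you promise to pay tomorrow
DELAY_COST = 2    # utility lost if you must wait a day and return with cash

def outcome(repays_when_trusted: bool) -> int:
    """Utility for an agent whose disposition the shopkeeper reads accurately.

    Credit is extended exactly when repayment is predicted, so the
    agent's disposition, not just today's action, fixes the payoff.
    """
    if repays_when_trusted:
        return ITEM_VALUE - PRICE           # item now, pay tomorrow
    return ITEM_VALUE - PRICE - DELAY_COST  # no credit: lose a day's use

cdt = outcome(repays_when_trusted=False)  # plans to never come back
tdt = outcome(repays_when_trusted=True)   # honours the past self's promise

assert tdt > cdt  # the trustworthy disposition strictly wins
```

Under any assignment where the item is worth more than its price and the delay costs something, the agent who genuinely intends to repay comes out ahead, which is the whole force of the example.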