If Newcomb’s problem has important real-world implications, why is it always phrased in terms of a mystical, all-knowing superintelligence?
Newcomb’s problem applies to the general class of games in which the other players try to guess your actions. As far as I can tell, the only reason to introduce Omega is to avoid dealing with messy, complicated probability estimates about the other players.
Unfortunately, in a forum where the idea that Omega could actually exist is widely accepted, people get caught up in trying to predict Omega’s actions instead of focusing on the problem of decision-making under prediction.
I used to ignore Newcomb’s problem for exactly that reason, until someone pointed out that it maps onto the issue of retaliation. (I called it revenge in the link, but that connotes vigilantism, so retaliation is the better term.) The problem doesn’t require an all-knowing superintelligence, just a predictor with a “pretty darn good” chance of correctly guessing what you’ll do; the sketch after the list below puts a number on “pretty darn good.”
In general, it’s applicable to any problem where:
a) Someone else chooses actions based on how they predict you’ll act, and they’re pretty good at predicting.
b) If the predictor expects you to take the seemingly dominant strategy, they treat you worse.
c) You have to make a choice after “the die is cast” (i.e. the predictor can’t take back their treatment).
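Here is the promised sketch: a minimal expected-value check in Python, using the standard $1,000,000 / $1,000 payoffs and reading the predictor’s accuracy p as the probability that its prediction matches your actual choice (the function names and the sample values of p are mine, for illustration):

```python
# Expected value of each Newcomb choice against a predictor with accuracy p,
# where p is the probability the prediction matches your actual choice.
# Standard payoffs: $1,000,000 in the opaque box iff one-boxing is predicted,
# $1,000 always in the transparent box.

def one_box_ev(p):
    # Predictor right (prob p): opaque box is full. Wrong: it's empty.
    return p * 1_000_000

def two_box_ev(p):
    # Predictor right (prob p): opaque box is empty, you get only $1,000.
    # Predictor wrong (prob 1 - p): you get both boxes.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.5005, 0.6, 0.9):
    print(f"p = {p:<7} one-box: ${one_box_ev(p):>12,.2f}  "
          f"two-box: ${two_box_ev(p):>12,.2f}")

# One-boxing pulls ahead as soon as p > 1,001,000 / 2,000,000 = 0.5005:
# the predictor only has to be barely better than a coin flip.
```

So on this reading, condition (a) really does only require “pretty good,” not omniscient.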
Note that in real life, it actually is common for people to a) predict your decisions well, and b) base their treatment of you on that prediction.
ETA: Well, in fairness I should add that life is, shall we say, an iterated game, which takes away a lot of the “die is cast” aspect of it...
Newcomb’s problem is widely accepted as being related to the prisoner’s dilemma. If you 2-box in Newcomb’s problem, you’ll never cooperate in the (one-shot) PD (the same dominance reasoning drives both choices), and the one-shot PD is generally considered to have real-world applications.
Omega has much better mind-reading abilities than most PD participants I would think.
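That gap can be quantified. Here is a sketch of a one-shot PD against an opponent who correctly predicts (and so mirrors) your choice with probability q, the same structure as Newcomb’s; the payoff numbers T=5, R=3, P=1, S=0 are assumed for illustration, not from the thread:

```python
# One-shot PD against an opponent who correctly predicts (and so mirrors)
# your choice with probability q. Payoffs are assumed for illustration:
# R=3 (mutual cooperation), P=1 (mutual defection), T=5 (temptation), S=0 (sucker).
T, R, P, S = 5, 3, 1, 0

def cooperate_ev(q):
    # Opponent mirrors you (prob q): (C, C). Otherwise they defect: (C, D).
    return q * R + (1 - q) * S

def defect_ev(q):
    # Opponent mirrors you (prob q): (D, D). Otherwise they cooperate: (D, C).
    return q * P + (1 - q) * T

for q in (0.5, 0.72, 0.9):
    print(f"q = {q}: cooperate {cooperate_ev(q):.2f}, defect {defect_ev(q):.2f}")

# Cooperation wins only once q > (T - S) / (T - S + R - P) = 5/7 ≈ 0.714 here,
# a much stiffer accuracy requirement than the ~0.5005 one-boxing needed above.
```

With these numbers, cooperating takes a substantially better predictor than one-boxing does, which is one way of cashing out the point that ordinary PD participants fall well short of Omega.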
This seems strange to me. Someone sufficiently altruistic or utilitarian would cooperate in a one-shot PD, since it’s not a zero-sum game (except in weird hypothetical land), and that would have no bearing on what choice they might make in Newcomb’s problem.
ETA: for some payoff matrices.
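To make the ETA concrete, here is a quick check of when mutual cooperation actually maximizes total utility; both payoff matrices are assumed, illustrative examples:

```python
# When does a utilitarian (maximizing the SUM of the two payoffs) prefer
# mutual cooperation in a one-shot PD? Both matrices are assumed,
# illustrative examples satisfying the PD ordering T > R > P > S.
matrices = {
    "standard": dict(T=5, R=3, P=1, S=0),    # 2R = 6 > T + S = 5
    "lopsided": dict(T=10, R=3, P=1, S=0),   # 2R = 6 < T + S = 10
}

for name, m in matrices.items():
    both_cooperate = 2 * m["R"]         # sum of payoffs at (C, C)
    one_exploits = m["T"] + m["S"]      # sum of payoffs at (D, C)
    verdict = "cooperate" if both_cooperate > one_exploits else "one side should defect"
    print(f"{name}: sum(C,C)={both_cooperate}, sum(D,C)={one_exploits} -> {verdict}")
```

A total-utility maximizer cooperates exactly when 2R > T + S; in the lopsided matrix the exploitation outcome has the larger sum, which is why the caveat matters.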