I think this post is pretty off-topic...
There’s been so much here lately on things like Newcomb and whatnot that we could do with some more normal threads...
I think there should be MORE Newcomb threads! It has very important real-world implications, which are left as an exercise for the reader.
If Newcomb’s problem has important real-world implications, why is it always phrased in terms of a mystical, all-knowing superintelligence?
Newcomb’s problem is applicable to the general class of game-type problems where the other players try to guess your actions. As far as I can tell, the only reason to introduce Omega is to avoid having to deal with messy, complicated probability estimates from the other players.
Unfortunately, in a forum where the idea that Omega could actually exist is widely accepted, people get caught up in trying to predict Omega’s actions instead of focusing on the problem of decision-making under prediction.
I used to ignore Newcomb’s problem for exactly that reason, until someone pointed out that there’s a mapping to the issue of retaliation. (I called it revenge in the link, but that connotes vigilantism, so retaliation is a better term.) The problem doesn’t require an all-knowing superintelligence, just some predictor with a “pretty darn good” chance of correctly guessing what you’ll do.
In general, it’s applicable to any problem where:
a) Someone else chooses actions based on how they predict you’ll act, and they’re pretty good at predicting.
b) If the predictor predicts you taking the seemingly dominant strategy, they treat you worse.
c) You have to make a choice after “the die is cast” (i.e. the predictor can’t take back their treatment).
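To put rough numbers on (a)–(c): here’s a minimal expected-value sketch, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque one) and a predictor who is right with probability p.

```python
# Expected value of each choice against a predictor with accuracy p.
# The payoffs are the standard Newcomb amounts, assumed for illustration.
SMALL = 1_000      # transparent box, always yours if you take it
BIG = 1_000_000    # opaque box, filled only if one-boxing was predicted

def ev_one_box(p):
    # Predictor right (prob p): opaque box is full. Wrong: it's empty.
    return p * BIG + (1 - p) * 0

def ev_two_box(p):
    # Predictor right (prob p): opaque box is empty, you get SMALL only.
    # Predictor wrong (prob 1 - p): you get both boxes.
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.5, 0.5005, 0.6, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))

# One-boxing pulls ahead once p > (BIG + SMALL) / (2 * BIG) ≈ 0.5005:
# barely better than a coin flip.
```

In other words, the predictor only has to be slightly better than chance before the seemingly dominant strategy starts losing.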
Note that in real life, it actually is common for people to a) predict your decisions well, and b) base their treatment of you on that prediction.
ETA: Well, in fairness I should add that life is, shall we say, an iterated game, which takes away a lot of the “die is cast” aspect of it...
Newcomb’s problem is widely accepted as being related to the prisoner’s dilemma. If you 2-box in Newcomb’s problem, you’ll never cooperate in (one-shot) PD, which is generally considered to have real-world applications.
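A rough sketch of that connection, with made-up payoffs (T=5, R=3, P=1, S=0) and an opponent whose move matches yours with probability q, playing the role of the predictor’s accuracy:

```python
# One-shot PD against an opponent whose move is correlated with yours:
# with probability q they play whatever you play. The payoffs below are
# illustrative, not canonical.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def ev_cooperate(q):
    return q * R + (1 - q) * S  # they match your C, or defect anyway

def ev_defect(q):
    return q * P + (1 - q) * T  # they match your D, or cooperate anyway

for q in (0.5, 5 / 7, 0.9):
    print(round(q, 3), round(ev_cooperate(q), 3), round(ev_defect(q), 3))

# Cooperating wins once q > (T - S) / ((T - S) + (R - P)) = 5/7 here:
# structurally the same as one-boxing against a good-enough predictor.
```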
Omega has much better mind-reading abilities than most PD participants I would think.
This seems strange to me. Someone sufficiently altruistic or utilitarian would cooperate in a one-shot PD, since it’s not a zero-sum game (except in weird hypothetical land), and that would have no bearing on what choice they’d make in Newcomb’s problem.
ETA: for some payoff matrices.
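(A small sketch of that caveat, with made-up matrices: a player who values the sum of both payoffs cooperates only when cooperation actually maximizes that sum.)

```python
# A utilitarian scores an outcome by the sum of both players' payoffs.
# Whether that makes cooperation the better move depends on the matrix.
def utilitarian_best_reply(payoffs, their_move):
    # payoffs[(mine, theirs)] = (my payoff, their payoff); moves are "C"/"D"
    def total(my_move):
        return sum(payoffs[(my_move, their_move)])
    return max(("C", "D"), key=total)

standard = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
            ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(utilitarian_best_reply(standard, "C"))  # C: 3+3 beats 5+0
print(utilitarian_best_reply(standard, "D"))  # C: 0+5 beats 1+1

# With a big enough temptation payoff, even a utilitarian defects:
harsh = {("C", "C"): (3, 3), ("C", "D"): (0, 10),
         ("D", "C"): (10, 0), ("D", "D"): (1, 1)}
print(utilitarian_best_reply(harsh, "C"))  # D: 10+0 beats 3+3
```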
After all, working them out yourself is equivalent to oneboxing.
I agree, but “normal” threads on LW are not supposed to be just normal threads.
The post was supposed to be in the spirit of the many self-improvement posts regarding akrasia, rationality, etc. It seemed logical that managing your information is an important component alongside the rest of the mental hygiene practices discussed here. If I was mistaken, I apologize.
There’s nothing wrong with the topic. Whether it turns out to be a good LW post probably depends on whether anyone contributes any substantially non-obvious advice.
I think the original question is valid as such, but there are tons of valid questions that could be asked in a similar manner. What I think the article lacks is some insight, or at least some effort at understanding the problem more deeply. Insights don’t have to be ground-breaking, but I think articles around here should provide some value to the reader. As it stands, it reads more like a “hey guys, what do you think of free will?” type of query.
I suspect that if you spent some time and effort trying to pinpoint the exact problem, or perhaps to generalize it (or whatever), it might lead you to interesting insights. Say that through this process you come up with a heuristic or principle for the problem. If the article provided that, it would have some value to the reader, and by virtue of being more specific, it could also spark interesting discussion. As it stands, it’s just too open-ended a question. Not that it cannot be answered, but it doesn’t inspire commentary.
(As an example, perhaps you could have expanded on the opening metaphor. I don’t know if it would have led anywhere interesting, but one never knows.)
I agree, and I admit laziness on my part for hoping someone else would insightfully reflect on my problem rather than offering at least a minimal starting solution myself. Ironically, I can’t seem to make time to analyze how I can make more time!
I disagree.