Here is my answer to Newcomb’s problem:
Omega doesn’t exist in reality. Therefore Newcomb’s problem is irrelevant and I don’t waste time thinking about it.
I wonder how many people come up with this answer. Most of them are probably smarter than me and also don’t waste time commenting with their opinion.
Am I missing something?
I’ve come up with a related answer in the past, but I don’t think that defense is the best angle to take anymore when it comes to Newcomb’s.
It helps to be very specific with why you’re rejecting a thought experiment. The statement “Omega doesn’t exist in reality” needs to be traced to the axioms that give you an impossibility proof. This both allows you to update your conclusion as soon as those axioms come into question and generalize from those axioms to other situations.
For example, the ‘frailty’ approach to Newcomb’s is to say “given that 1) my prior probability of insanity is higher than my prior probability of Omega and 2) any evidence for Omega’s supernatural ability is at least as strong evidence for my insanity, I can’t reach a state where I think that it’s more likely that Omega has supernatural powers than that I’m insane.” This generalizes to, say, claims from con men; you might think that any evidence they present for their claims is also evidence for their untrustworthiness, and reach a point where you literally can’t believe them. (Is this a good state to be in?) But it’s not clear that 2 is true, and even if the conclusion goes through, it helps to have a decision theory for what to do when you think you’re insane!
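To make the frailty argument concrete, here is a minimal sketch in Bayesian odds form; the numbers are made up purely for illustration. If the prior odds favor insanity, and every piece of evidence fits insanity at least as well as it fits Omega, the posterior odds can never flip in Omega’s favor.

```python
# A minimal sketch of the 'frailty' argument. All numbers are invented.

def posterior_odds(prior_omega, prior_insane, lik_omega, lik_insane):
    """Posterior odds of 'Omega is real' vs. 'I am insane' after evidence E.

    By Bayes' rule: odds(Omega : insane | E)
      = (P(E|Omega) / P(E|insane)) * (P(Omega) / P(insane)).
    """
    return (lik_omega / lik_insane) * (prior_omega / prior_insane)

# Premise 1: insanity starts out more probable than Omega.
prior_omega, prior_insane = 1e-9, 1e-6

# Premise 2: any evidence for Omega is at least as strong evidence for
# insanity, i.e. lik_omega <= lik_insane, so the ratio never exceeds 1.
for lik_omega, lik_insane in [(0.5, 0.5), (0.9, 0.9), (0.99, 1.0)]:
    odds = posterior_odds(prior_omega, prior_insane, lik_omega, lik_insane)
    print(odds)  # always < 1: 'insane' stays more probable than 'Omega'
```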
Another approach to Newcomb’s problem is to get very specific about what we mean by ‘causality,’ because Newcomb’s is a situation where we have a strong verbal argument that causality shouldn’t exist and a strong verbal argument that causality should exist. In order to resolve the argument, we need to figure out what causality means mathematically, and then we can generalize much more broadly, and the time spent formalizing causality is not at all wasted.
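One way to see what is at stake in that formalization is a toy simulation (the structure and numbers here are invented for illustration, not taken from any particular formal treatment): a latent disposition causes both the prediction and the choice, and conditioning on the choice versus intervening on it give opposite rankings, which is exactly the evidential-versus-causal split.

```python
import random

# Toy causal model of Newcomb's: a latent 'disposition' causes both
# the predictor's prediction and (normally) the agent's choice.

def trial(intervene=None, accuracy=0.9):
    disposition = random.choice(["one-box", "two-box"])
    # The prediction depends on the disposition, not on the final choice.
    if random.random() < accuracy:
        prediction = disposition
    else:
        prediction = "one-box" if disposition == "two-box" else "two-box"
    opaque = 1_000_000 if prediction == "one-box" else 0
    # do(choice): sever the arrow from disposition to choice.
    choice = intervene if intervene else disposition
    return choice, opaque + (1_000 if choice == "two-box" else 0)

random.seed(0)
n = 100_000

# Conditioning (evidential): average payoff among observed one-boxers.
obs = [trial() for _ in range(n)]
for c in ("one-box", "two-box"):
    payoffs = [p for choice, p in obs if choice == c]
    print("observe", c, sum(payoffs) / len(payoffs))  # one-box looks better

# Intervening (causal): force the choice, leaving the disposition alone.
for c in ("one-box", "two-box"):
    payoffs = [trial(intervene=c)[1] for _ in range(n)]
    print("do", c, sum(payoffs) / len(payoffs))  # two-box looks better
```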
Thanks for your reply. I didn’t expect to get so much feedback.
I tend to assume that I am not insane. Maybe I am overconfident in that regard :-)
I would call my approach to Newcomb’s problem an example of rational ignorance. I think the cost of thinking about this problem (my time) is higher than the possible benefit I could get out of it.
Depends. Do you generally think that thought experiments involving fictional/nonexistent entities are irrelevant (to what?) and not worth thinking about? Or is there something special about Newcomb’s problem?
If the former, yes, I think you’re missing something. If the latter, then you might not be missing anything.
Thanks for this answer.
I think it’s only Newcomb’s problem in particular. I just can’t imagine how 1) knowing the right answer to this problem or 2) thinking about it can improve my life or that of any other person in any way.
Quite recently I was reading someone saying (I can’t remember where; LessWrong itself? ETA: yes, here and on So8res’ blog) that Newcomb-like problems are the rule in social interactions. Every time you deal with someone who is trying to predict what you are going to do and might be better at it than you, you have a Newcomb-like problem. If you just make what seems to you like the obviously better decision, the other person may have anticipated that and made that choice appear deceptively better for you.
“Hey, check out this great offer I received! Of course, these things are scams, but I just can’t see how this one could be bad!”
“Dude, you’re wondering whether you should do exactly what a con artist has asked you to do?”
Now and then some less technically-minded friend will ask my opinion about a piece of dodgy email they received. My answer always begins, “IT’S A SCAM. IT’S ALWAYS A SCAM.”
Newcomb’s Problem reduces the situation to its bare essentials. A decision theory that two-boxes may not be much use for an AGI, or for a person.
(nods)
And how would you characterize Newcomb’s problem?
For example, I would characterize it as raising questions about how to behave in situations where our own behaviors can reliably (though imperfectly) be predicted by another agent.
Imagine a different set of players. For example, some software which is capable of modifying its own code (that’s nothing out of the ordinary, such things exist) and a programmer capable of examining that code.
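A minimal sketch of that setup, with invented one_boxer/two_boxer agents and a “predictor” that fills the boxes by reading the agent’s source code (the crude string check stands in for real program analysis):

```python
import inspect

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def predictor_fill_boxes(agent):
    """A crude 'Omega': predicts by reading the agent's source code.

    This is the hypothetical in miniature: the predictor's accuracy
    comes from inspection, not magic.
    """
    source = inspect.getsource(agent)
    predicted_one_box = '"one-box"' in source
    opaque_box = 1_000_000 if predicted_one_box else 0
    return opaque_box, 1_000  # (opaque box, transparent box)

def payoff(agent):
    opaque, transparent = predictor_fill_boxes(agent)
    if agent() == "one-box":
        return opaque
    return opaque + transparent

print(payoff(one_boxer))  # 1,000,000
print(payoff(two_boxer))  # 1,000
```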
Yes, you’re missing something. You’re fighting the hypothetical.
Some hypotheticals are worth fighting. What’s the right accounting policy if 1=2? If 1=2, you have bigger problems.
Not the one in question, though, since Omega can be approximated—and typically is, even if only as a (50+x)% correct predictor. Humans are an approximation of Omega, in some sense. Solving a problem assuming a hypothetical Omega is not unlike assuming cows are spheres in a vacuum, i.e. a solution of the idealized thought experiment can still be relevant.
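To make the “(50+x)%” point concrete: with the standard $1,000,000/$1,000 payoffs, a quick expected-value calculation (a sketch in the evidential style) shows one-boxing pulls ahead once the predictor’s accuracy p exceeds 1,001,000 / 2,000,000 = 50.05%.

```python
# Expected payoff vs. predictor accuracy p, standard Newcomb payoffs.

def ev_one_box(p):
    # With prob. p the predictor foresaw one-boxing: opaque box is full.
    return p * 1_000_000

def ev_two_box(p):
    # With prob. p the predictor foresaw two-boxing: opaque box is empty.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.5005, 0.51, 0.75, 0.99):
    better = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"p={p}: one-box={ev_one_box(p):,.0f}, "
          f"two-box={ev_two_box(p):,.0f} -> {better}")
# Break-even at p = 1,001,000 / 2,000,000 = 0.5005.
```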