Due to my math background, the thesis read like total gibberish: tons and tons of “not even wrong”, like the philosophical tomes written on the unexpected hanging paradox before the logical contradiction due to self-reference was pointed out.
But one passage stood out as meaningful:
the predictor just has to be a little bit better than chance for Newcomb’s problem to arise… One doesn’t need a good psychologist for that. A friend who knows the decision maker well is enough.
This passage is instructively wrong. To screw with such an Omega, just ask a different friend who knows you equally well, take their judgement and do the reverse. (Case 3 “Terminating Omega” in my post.) This indicates the possibility that the problem statement may be a self-contradictory lie, just like the setup of the unexpected hanging paradox. Of course, the amount of computation needed to bring out the contradiction depends on how much mystical power you award to Omega.
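A toy sketch of the trick, in code. This is my own illustration, not anything from the thesis or the original comment; the “profile” and the friend’s decision rule are made-up stand-ins. The point it shows: whatever judgement the second friend produces, the policy below negates it, so that friend’s accuracy on your actual choice is zero rather than “a little bit better than chance”.

```python
# Toy model of the trick described above (the stand-ins are mine, not the
# thesis'): ask a second, equally well-informed friend for their judgement,
# then do the reverse.  Whatever basis the friend uses, their stated
# judgement is now anti-correlated with your actual choice.

def friend_prediction(profile: dict) -> str:
    """Stand-in for 'a friend who knows the decision maker well'."""
    return "one-box" if profile["usually_cooperates"] else "two-box"

def decide(profile: dict) -> str:
    """Your policy: do the reverse of the second friend's judgement."""
    return "two-box" if friend_prediction(profile) == "one-box" else "one-box"

for cooperates in (True, False):
    profile = {"usually_cooperates": cooperates}
    print(friend_prediction(profile), "predicted;", decide(profile), "chosen")
# The judgement is wrong in both cases, so this 'Omega' is not even as good
# as chance against a decision maker playing the reversal trick.
```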
I apologize for getting on my high horse here. This discussion should come to an end somehow.
I think this reply is also illuminating: the stated goal in Newcomb’s problem is to maximize your financial return. If your goal is to make Omega’s prediction wrong, you are solving a different problem.
I do agree that the problem may be subtly self-contradictory. Could you point me to your preferred writeup of the Unexpected Hanging Paradox?
Uh, Omega has no business deciding what problem I’m solving.
The solution to the unexpected hanging paradox that I consider definitively correct is outlined on the Wikipedia page, but it is simple enough to state here. The judge actually says “you can’t deduce the day you’ll be hanged, even if you use this statement as an axiom too”. This phrase is self-referential, like the phrase “this statement is false”. Although not all self-referential statements are self-contradictory, this one turns out to be. The proof of self-contradiction simply follows the prisoner’s reasoning. This line of attack seems to have been first rigorously formalized by Fitch in “A Goedelized formulation of the prediction paradox” (I can’t find the full text online). And that’s all there is to it.
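A minimal sketch of that elimination reasoning, assuming the usual Monday-to-Friday setup; the day names and the loop are my own rendering, not Fitch’s formalization.

```python
# The prisoner's reasoning, taking the judge's strengthened, self-referential
# statement as an axiom: "you will be hanged on one of these days, and you
# will not be able to deduce the day beforehand, even using this statement
# itself."

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
possible = list(days)

# Work backwards: the latest still-possible day is always ruled out, because
# surviving to the evening before it would make that day deducible,
# contradicting the axiom.
while possible:
    ruled_out = possible.pop()
    print(f"{ruled_out} ruled out; still possible: {possible}")

# Every day is now eliminated, yet the axiom also says the hanging happens on
# one of these days.  The statement refutes itself; the self-contradiction is
# the whole content of the "paradox".
print("No admissible day remains, so the judge's statement is self-contradictory.")
```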
No, it doesn’t; but if you’re solving something other than Newcomb’s problem, why discuss it on this post?
I’m not solving it in the sense of utility maximization. I’m solving it in the sense of demonstrating that the input conditions might well be self-contradictory, using any means available.
Okay yes, I see what you’re trying to do and the comment is retracted.
Maximising your financial return entails that you make Omega’s prediction wrong: if you can get it to predict that you will one-box when you actually two-box, you maximise your financial return.
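For reference, a sketch of the standard payoff table being assumed in this thread (box A always holds $1,000, box B holds $1,000,000 only if one-boxing was predicted); the amounts are the conventional ones and the cell labels are mine.

```python
# Standard Newcomb payoffs (to the decision maker), indexed by
# (Omega's prediction, your actual choice).
payoffs = {
    ("predicted one-box", "one-box"): 1_000_000,
    ("predicted one-box", "two-box"): 1_001_000,   # the cell discussed above
    ("predicted two-box", "one-box"): 0,
    ("predicted two-box", "two-box"): 1_000,
}

best_cell = max(payoffs, key=payoffs.get)
print(best_cell, payoffs[best_cell])
# -> ('predicted one-box', 'two-box') 1001000: fooling Omega into a wrong
#    one-box prediction is indeed the single highest payoff.
```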
Well, it had better not be predictable that you’re going to try that. I mean, at the point where Omega realizes, “Hey, this guy is going to try an elaborate clever strategy to get me to fill box B and then two-box,” it’s pretty much got you pegged.
That’s not so—the “elaborate clever strategy” does include a chance that you’ll one-box. What does the payoff matrix look like from Omega’s side?
I never said it was an easy thing to do. I just meant that that situation is the maximum, if it is reachable, which depends upon the implementation of Omega in the real world.
My point is merely that getting Omega to predict wrong is easy (flip a coin). Getting an expectation value higher than $1 million is what’s hard (and likely impossible, if Omega is much smarter than you, as Eliezer says above).
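To put rough numbers on that, here is a sketch under two assumptions that are mine rather than the comment’s: the predictor is right with probability p against a deterministic chooser, and (per the convention mentioned a couple of comments below) a detected randomizer gets an empty box B.

```python
# Expected payoffs with the standard amounts, assuming the predictor is right
# with probability p against a deterministic chooser (an assumption of this
# sketch, not something specified in the thread).

def ev_one_box(p: float) -> float:
    return p * 1_000_000                     # box B is filled only if predicted

def ev_two_box(p: float) -> float:
    return 1_000 + (1 - p) * 1_000_000       # box A, plus B when it mispredicts

# Coin-flip strategy, assuming a detected randomizer gets an empty box B:
# half the time both boxes ($1,000), half the time only the empty box B ($0).
ev_coin_flip = 0.5 * 1_000 + 0.5 * 0

for p in (0.5, 0.5005, 0.9, 0.99):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}, two-box {ev_two_box(p):,.0f}")
print(f"coin flip: {ev_coin_flip:,.0f}")
# One-boxing overtakes two-boxing once p exceeds 0.5005, and none of these
# strategies has an expectation above $1,000,000: making the prediction wrong
# is cheap, profiting from making it wrong is the hard part.
```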
I believe the “ask a second friend and do the reverse” scenario can be made consistent. Your first friend (1F) will predict that you will ask your second friend (2F). Your second friend will predict that you will do the opposite of whatever they say, and so won’t be able to predict anything. When you do eventually choose, you’ll have to fall back on something consistent, which your first friend will consequently predict.
If you force 2F to make some arbitrary prediction, though, then if 1F can predict 2F’s prediction, 1F will predict you’ll do the opposite. If 1F can’t do that, he’ll do whatever he would do if you used a quantum randomizer (I believe this is usually said to be not putting anything in the box).
You have escalated the mystical power of Omega (surely it’s no longer just a human friend who knows you well), which supports my point about the quoted passage. If your new Omegas aren’t yet running full simulations (a case resolved by indexical uncertainty) but rather some kind of coarse-grained approximation, then I should have enough sub-pixel and off-scene freedom to condition my action on 2F’s response with neither 1F nor 2F knowing it. If you have some other mechanism in mind for how Omega might work, please elaborate: I need to understand an Omega to screw it up.
To determine exactly how to screw with your Omega, I need to understand what it does. If it’s running something less than a full simulation, something coarse-grained, I can exploit it: condition on a sub-pixel or off-scene detail. (The full simulation scenario is solved by indexical uncertainty.) In the epic thread no one has yet produced a demystified Omega that can’t be screwed with. Taboo “predict” and explain.