Doesn’t Newcomb’s problem remain pretty much the same if Omega is “only” able to predict your answer with 99% accuracy?
In that case, a one-boxer would get a million 99% of the time and nothing 1% of the time, and a two-boxer would get a thousand 99% of the time and a thousand plus a million 1% of the time. Unless you have a really weirdly shaped utility function, one-boxing still seems much better.
(I see the “omnipotence” bit as a bit of a spherical-cow assumption that lets us sidestep some irrelevant issues and get to the meat of the problem, but it does become important when you’re dealing with bits of code simulating each other.)
If Omega is only able to predict your answer with 75% accuracy, then the expected payoff for two-boxing is:
0.25 * 1,001,000 + 0.75 * 1,000 = 251,000
and the expected payoff for one-boxing is:
0.25 * 0 + 0.75 * 1,000,000 = 750,000.
So even if Omega is just a pretty good predictor, one-boxing is the way to go (unless you really need a thousand dollars right now, or the usual concerns about money vs. utility apply).
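For concreteness, here is a minimal sketch of that arithmetic (the $1,000,000 / $1,000 payoffs are the ones from the problem statement; the function and its name are just for illustration):

```python
# Expected payoffs in Newcomb's problem when Omega predicts correctly
# with probability `accuracy` (standard $1,000,000 / $1,000 payoffs assumed).
def expected_payoffs(accuracy, big=1_000_000, small=1_000):
    one_box = accuracy * big  # box B is full only if Omega predicted one-boxing
    two_box = accuracy * small + (1 - accuracy) * (big + small)
    return one_box, two_box

print(expected_payoffs(0.75))  # (750000.0, 251000.0)
print(expected_payoffs(0.99))  # roughly (990000.0, 11000.0)
```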
For the curious, you should be indifferent between one- and two-boxing when Omega predicts your response correctly 50.05% of the time. If Omega is just perceptibly better than chance, one-boxing is still the way to go.
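A quick sketch of where that 50.05% comes from, under the same payoff assumptions as above:

```python
# Break-even accuracy p: the point where one-boxing and two-boxing have equal
# expected payoff, i.e. p*big = p*small + (1 - p)*(big + small).
big, small = 1_000_000, 1_000
p_star = (big + small) / (2 * big)
print(p_star)  # 0.5005, i.e. 50.05%
```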
Now I wonder how good humans are at playing Omega.
Better than 50.05% accuracy actually doesn’t sound that implausible, but I will note that if Omega is probabilistic, then the way in which it is probabilistic affects the answer. E.g., if Omega works by asking people what they will do and then believing them, this may well get better-than-chance results with humans, at least some of whom are honest. However, the correct response in this version of the problem is to two-box and lie.
Sure, I was reading the 50.05% in terms of probability, not frequency, though I stated it the other way. If you have information about where his predictions are coming from, that will change your probability that his prediction is correct.
Fair point, you’re right.
… and if your utility scales linearly with money up to $1,001,000, right?
Yes, that sort of thing was addressed in the parenthetical in the grandparent. It doesn’t specifically have to scale linearly.
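As a toy illustration of how a strongly non-linear utility can flip the comparison (the saturating utility function below is made up purely for illustration; it models “I really need a thousand dollars, and anything beyond that doesn’t matter”):

```python
# Toy example: a utility that saturates at $1,000 makes two-boxing at least
# tie with one-boxing at any predictor accuracy.
def u(x, cap=1_000):
    return min(x, cap) / cap

def expected_utilities(accuracy, big=1_000_000, small=1_000):
    one_box = accuracy * u(big)                                      # = accuracy
    two_box = accuracy * u(small) + (1 - accuracy) * u(big + small)  # = 1.0
    return one_box, two_box

print(expected_utilities(0.99))  # (0.99, 1.0) -- two-boxing wins here
```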
Or if the payoffs are reduced to fall within the (approximately) linear region.
But if they are too low (say, $1.00 and $0.01), I might do things other than what gets me more money, Just For The Hell Of It.
And thus was the first zero-boxer born.
Zero-boxer: “Fuck you, Omega. I won’t be your puppet!”
Omega: “Keikaku doori...”
This seems like an overly simplistic view. You need to specify your source of knowledge about the correlation between the quality of Omega’s predictions and the decision theory that the prediction target uses.
And even then, you need to be sure that your using an exotic decision theory will not throw Omega too far off the trail (note that erring in your case will not ruin its nice track record).
I’m not saying it is impossible to specify, just that your description could be improved.
Sure, it would also be nice to know that your wearing blue shoes will not throw off Omega. In the absence of any such information (we can stipulate it if need be), the analysis is correct.
Interesting and valuable point; it brings the issue back to decision theory and away from impossible physics.
As I have said in the past, I would one-box because I think Omega is a con-man. When magicians do this trick, the box SEEMS to be sealed ahead of time, but in fact there is a mechanism that lets the magician slip something inside it. In the case of finding a signed card in a sealed envelope, the envelope had a razor slit through which the magician could surreptitiously push the card. Ultimately, Siegfried and Roy were doing the same trick with tigers in cages. If regular (but talented) humans like Siegfried and Roy could trick thousands of people a day, then Omega can get the million out of the box if I two-box, or get it in there if I one-box.
Yes, I would want to build an AI clever enough to figure out a probable scam, and then clever enough to figure out whether it can profit from that scam by going along with it. No, I wouldn’t want that AI to think it had proof that there was a being that could seemingly violate the causal arrow of time merely because it seemed to have done so a number of times of the same order as Siegfried and Roy managed.
Ultimately, my fear is that if you can believe in Omega at face value, you can believe in god, and an FAI that winds up believing something is god when it is actually just a con-man is no friend of mine.
If I see Omega getting the answer right 75% of the time, I think “the clever con-man makes himself look real by appearing to be constrained by real limits.” Does this make me smarter or dumber than we want a powerful AI to be?
Nobody is proposing building an AI that can’t recognize a con-man. Even if, in all practical cases, putative Omegas will be con-men, this is still an edge case for the decision theory, and an algorithm that might be determining the future of the entire universe should not break down on edge cases.
I have seen numerous statements of Newcomb’s problem in which it is stated that “Omega got the answer right 100 out of 100 times before.” That is PATHETIC evidence that Omega is not a con-man, and it is not a prior, it is a posterior. So if there is a valuable edge case here (and I’m not sure there is), it has been left implicit until now.
Consider the title of my discussion post: we don’t even need a near-magical Omega to set up this problem. So WTF is he doing here? Just confusing things and misleading people (at least me).