Interesting and valuable point; it brings the issue back to decision theory and away from impossible physics.
As I have said in the past, I would one-box because I think Omega is a con-man. When magicians do this trick, the box only SEEMS to be sealed ahead of time; in fact there is a mechanism that lets the magician slip something inside it. In the case of finding a signed card in a sealed envelope, the envelope had a razor slit through which the magician could surreptitiously push the card. Ultimately, Siegfried and Roy were doing the same trick with tigers in cages. If regular (but talented) humans like Siegfried and Roy could trick thousands of people a day, then Omega can get the million out of the box if I two-box, or get it in there if I one-box.
Yes, I would want to build an AI clever enough to recognize a probable scam, and then clever enough to figure out whether it can profit from that scam by going along with it. No, I wouldn’t want that AI to think it had proof of a being that could violate the causal arrow of time merely because it seemed to have done so a number of times on the same order as Siegfried and Roy managed.
Ultimately, my fear is that if you can believe in Omega at face value, you can believe in god, and an FAI that winds up believing something is god when it is actually just a con-man is no friend of mine.
If I see Omega getting the answer right 75% of the time, I think “the clever con-man makes himself look real by appearing to be constrained by real limits.” Does this make me smarter or dumber than we want a powerful AI to be?
Nobody is proposing building an AI that can’t recognize a con-man. Even if in all practical cases putative Omegas will be con-men, this is still an edge case for the decision theory, and an algorithm that might be determining the future of the entire universe should not break down on edge cases.
I have seen numerous statements of Newcomb’s problem which say “Omega got the answer right 100 out of 100 times before.” That is PATHETIC evidence that Omega is not a con-man, and it is not a prior, it is a posterior. So if there is a valuable edge case here (and I’m not sure there is), it has been left implicit until now.
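To make the “pathetic evidence” point concrete, here is a minimal Bayesian sketch in Python. The prior and the con-man success rate are my own illustrative assumptions, not part of the original problem; the point is only that 100/100 correct predictions barely moves a sensible prior against causality-violating prediction.

```python
# Illustrative Bayesian update (assumed numbers): how much does 100/100 correct
# predictions shift us from "Omega is a con-man" toward "Omega genuinely predicts"?

prior_genuine = 1e-12        # assumed prior that a causality-violating predictor exists
prior_conman = 1 - prior_genuine

p_success_genuine = 1.0      # a real predictor nails every trial
p_success_conman = 0.99      # assumed rate at which a skilled stage con-man pulls off the trick

trials = 100

# Likelihood of observing 100/100 successes under each hypothesis
like_genuine = p_success_genuine ** trials   # 1.0
like_conman = p_success_conman ** trials     # ~0.366

posterior_odds = (prior_genuine * like_genuine) / (prior_conman * like_conman)
print(posterior_odds)        # ~2.7e-12: the evidence barely budges the prior
```

Under these assumptions the posterior odds for a genuine Omega are still around 3 in a trillion, which is the sense in which 100 successes is not much evidence at all.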