Thank you, that is helpful. I still have a slight problem with it, though. In the classic Newcomb’s problem, I’m in a state of uncertainty about Omega’s prediction. Only when I actually pick up either one box or two can I say with confidence what Omega did. At the moment that I pick up Box B, I do know that I am leaving behind $1,000 in Box A. At this point, I might be tempted to think that I should grab that box as well, since I already “know” what’s inside it. The problem is that Omega probably predicted that temptation. Because I don’t know Omega’s decision while I’m considering the problem, I can’t hope to outsmart it.
I would argue, though, that getting $1,001,000 out of Newcomb’s problem is better than getting $1,000,000. If there’s a way to make that happen, a rational agent should pursue it. That is only possible if you can outsmart Omega, which means thinking one level further than it does; a very difficult challenge. In classic Newcomb’s, you have to presume that Omega has predicted every thought you have and is thinking ahead of you, so you can never assume that you know what Omega will do, because Omega knows you will assume that and act differently. In transparent Newcomb’s, however, we can see what Omega has done, and so we have a chance to outsmart it.
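The payoff comparison I'm making can be laid out explicitly. Here's a minimal sketch (my own illustration, not anything from the problem's canonical statement) that enumerates the outcomes, assuming Box A always holds $1,000 and Omega fills Box B only when it predicts one-boxing. The $1,001,000 outcome only appears when your actual choice diverges from Omega's prediction, which is exactly the "outsmarting" I'm describing:

```python
BOX_A = 1_000           # transparent box, always contains $1,000
BOX_B_FULL = 1_000_000  # Box B is filled only if Omega predicts one-boxing

def payoff(choice: str, prediction: str) -> int:
    """Return the agent's winnings given its choice and Omega's prediction."""
    box_b = BOX_B_FULL if prediction == "one-box" else 0
    # One-boxers take only Box B; two-boxers take both.
    return box_b if choice == "one-box" else box_b + BOX_A

# If Omega is a perfect predictor, prediction always matches choice:
print(payoff("one-box", prediction="one-box"))   # 1000000
print(payoff("two-box", prediction="two-box"))   # 1000

# The outcome I want requires beating the prediction:
print(payoff("two-box", prediction="one-box"))   # 1001000
```

The table makes the stakes of my argument concrete: against a perfect predictor the third row is unreachable, so the whole question is whether transparent Newcomb's ever puts you in it.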
Obviously, if we anticipate being faced with this problem, we can commit in advance to taking only one box, so that Omega fills it with $1,000,000, but that’s not what transparent Newcomb’s is asking. In transparent Newcomb’s, an alien flies up to you and drops off two transparent boxes containing, between them, $1,001,000. It doesn’t matter to me what algorithm Omega used to decide to do this. Rationalists should win. If I can outsmart Omega, and transparent Newcomb’s gives me the opportunity to do so, I should take it.