Usually, in the thought experiment, we assume that Omega has enough computation power to simulate the agent, but that the agent does not have enough computation power to compute Omega. We usually further assume that the agent halts and that Omega is a perfect predictor. However, these are expositional simplifications, and none of these assumptions are necessary in order to put the agent into a Newcomblike scenario.
For example, in the game nshepperd is describing (where Omega plays Newcomb’s problem, but only puts the money in the box if it has very high confidence that you will one-box), if you try to simulate Omega, you won’t get the money. You’re still welcome to simulate Omega, but while you’re doing that, I’ll be walking away with a million dollars and you’ll be spending lots of money on computing resources.
No one’s saying you can’t, they’re just saying that if you find yourself in a situation where someone is predicting you and rewarding you for obviously acting like they want you to, and you know this, then it behooves you to obviously act like they want you to.
Or to put it another way, consider a game where Omega is only a pretty good predictor, who only puts the money in the box if Omega predicts that you one-box unconditionally (e.g., without using a source of randomness), and whose predictions are correct 99% of the time. Omega here doesn’t have any perfect knowledge, and we’re not necessarily assuming that anyone has superpowers, but I’d still one-box.
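To make the arithmetic behind that choice concrete, here’s a minimal expected-value sketch of the 99%-accurate-predictor game. It assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent box), which aren’t stated above, and it ignores the unconditionality requirement (it just compares the two pure strategies):

```python
# Expected-value sketch for the 99%-accurate predictor game.
# Payoffs are the standard Newcomb amounts -- an assumption, since
# they aren't stated in the discussion above.

ACCURACY = 0.99    # probability Omega predicts your choice correctly
BIG = 1_000_000    # opaque box, filled only if Omega predicts one-boxing
SMALL = 1_000      # transparent box, always present

# If you one-box: with probability ACCURACY, Omega predicted it and
# filled the opaque box; otherwise you get nothing.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0

# If you two-box: with probability (1 - ACCURACY), Omega wrongly
# predicted one-boxing and you take both boxes; otherwise just SMALL.
ev_two_box = (1 - ACCURACY) * (BIG + SMALL) + ACCURACY * SMALL

print(f"one-box EV: ${ev_one_box:,.0f}")   # $990,000
print(f"two-box EV: ${ev_two_box:,.0f}")   # $11,000
```

Even with a merely pretty-good predictor, the one-boxer comes out far ahead in expectation, which is the point: no perfect prediction is needed to make the scenario Newcomblike.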
Or if you want to see a more realistic problem (where the predictor has only human-level accuracy), check out Hintze’s formulation of Parfit’s Hitchhiker (though be warned: I’m pretty sure he’s wrong about TDT succeeding on that formulation. UDT succeeds on this problem, but TDT would fail).