Errrr. The agent does not simulate anything in my argument. The agent has a “mental model” of Omega, in which Omega is a perfect simulator. It’s about representation of the problem within the agent’s mind.
In your link, Omega, the function U(), is a perfect simulator. It calls the agent function A() twice: once to get its prediction, and once for the actual decision.
The problem would work just as well if the first call went not to A directly but to an oracle queried about whether A()=1. There are ways of predicting that aren't simulation, and if that's what Omega uses, your idea falls apart.
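A minimal sketch of the world-program formulation being discussed, assuming the usual Newcomb payoffs; the function names U and A follow the linked formulation, but the payoff values and the one-boxing convention (action 1 = take only the opaque box) are illustrative assumptions:

```python
def A():
    # A one-boxing agent: returns 1, meaning "take only the opaque box".
    return 1

def U(agent):
    # Omega as a perfect simulator: the first call to the agent function
    # is the "prediction", the second is the actual decision.
    prediction = agent()
    # Omega fills the opaque box iff it predicts one-boxing.
    opaque_box = 1_000_000 if prediction == 1 else 0
    action = agent()
    if action == 1:
        return opaque_box              # one-box: opaque box only
    else:
        return opaque_box + 1_000      # two-box: both boxes
```

Under this sketch, U(A) pays the one-boxer 1,000,000, while an agent that always returns 2 gets only 1,000. The oracle variant in the comment above would replace the first agent() call with a query to a predictor that never runs A at all.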
Unless Omega predicts without simulating: for instance, this formulation of UDT can be formally proved to one-box without simulating.