Problem 1: Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. “Before you entered the room, I ran a simulation of this problem as presented to an agent running TDT. I won’t tell you what the agent decided, but I will tell you that if the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put $1 million in Box B. Regardless of how the simulated agent decided, I put $1000 in Box A. Now please choose your box or boxes.”
This is indeed a problem, and one I would place in the general class of "dealing with other agents who are fucking with you." It is not one that can be solved, and I believe a "correct" decision theory will, in fact, lose (compared to CDT) in this case.
Note that there is some chance I am confused here in a way analogous to the way people who believe "two-boxing on Newcomb's problem is rational" are confused; there could be a deep insight I am missing. This seems comparatively unlikely.
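To make the payoff comparison concrete, here is a minimal sketch (my own illustration in code, not part of the problem statement). It assumes the simulated TDT agent one-boxes, which is what the usual analysis of this setup says TDT does; under that assumption, an agent whose choice mirrors the simulation one-boxes for $1,000,000, while a CDT agent two-boxes and walks away with $1,001,000.

```python
# Sketch of the payoffs in Problem 1, assuming the simulated TDT agent
# one-boxes (an assumption, since Omega does not reveal the simulation's
# choice). Box A always holds $1000; Box B holds $1M iff the simulation
# one-boxed.

BOX_A = 1_000
BOX_B_IF_SIM_ONE_BOXED = 1_000_000

def payoff(simulated_choice: str, your_choice: str) -> int:
    """Return your winnings given the simulated agent's choice and yours."""
    box_b = BOX_B_IF_SIM_ONE_BOXED if simulated_choice == "one-box" else 0
    if your_choice == "one-box":
        return box_b
    return BOX_A + box_b  # two-boxing takes both boxes

# If the simulation one-boxed, Box B is already full regardless of your choice:
print(payoff("one-box", "one-box"))  # 1000000 -- what a TDT-style agent gets
print(payoff("one-box", "two-box"))  # 1001000 -- what a CDT agent gets
```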