Now if we explain the situation to the inside human, they may not be quite so callous. Instead they might reason “If I don’t take the small box, there is a good chance that a ‘real’ human on the outside will then get $10,000. That looks like a good deal, so I’m happy to walk away with nothing.”
Put differently, when we see an empty box we might not conclude that the predictor didn’t fill the box. Instead, we might consider the possibility that we are living inside the predictor’s imagination, being presented with a hypothetical that need not have any relationship to what’s going on out there in the real world.
When trying to make the altruistically best decision given that I’m being simulated, shouldn’t I also consider the possibility that the predictor is simulating me in order to decide how to fill the boxes in some kind of transparent Anti-Newcomb problem, where the $10,000 is there if and only if it predicts I would take the $1,000 in transparent Newcomb? In that case I’d do the best thing by the real version of me by two-boxing.
This sounds a bit silly, but I guess I’m making the point that ‘choose your action altruistically, factoring in the possibility that you’re in a simulation’ requires not just a prior on whether you’re in a simulation, but also a prior on the causal link between the simulation and the real world.
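To make that concrete, here’s a toy expected-value sketch in Python. All the priors and payoff assignments are made up for illustration, and I’m assuming (conditional on being simulated) that my choice affects the real me only through how the predictor fills the real boxes:

```python
# Toy sketch (all numbers made up): expected dollars for the real-world me,
# conditional on my being a simulation, under two hypotheses about *why*
# the predictor is simulating me.
#
#   - "Newcomb sim": the big box is filled for the real me iff the
#     simulation walks away from the $1,000.
#   - "Anti-Newcomb sim": the big box is filled for the real me iff the
#     simulation takes the $1,000.

BIG = 10_000  # contents of the big box

def expected_real_payoff(action: str, p_newcomb: float, p_anti: float) -> float:
    """Expected payoff to the real me from my action as a simulation.

    p_newcomb and p_anti are priors on the two causal-link hypotheses
    (conditional on being simulated at all); they needn't sum to 1 if
    there are other reasons I might be simulated.
    """
    if action == "walk_away":
        return p_newcomb * BIG   # only the Newcomb predictor rewards this
    if action == "take_small":
        return p_anti * BIG      # only the Anti-Newcomb predictor rewards this
    raise ValueError(f"unknown action: {action}")

# If I'm fairly sure it's a standard transparent Newcomb, walking away wins:
print(expected_real_payoff("walk_away", 0.9, 0.1))   # 9000.0
print(expected_real_payoff("take_small", 0.9, 0.1))  # 1000.0

# But tilt the prior toward the Anti-Newcomb hypothesis and it flips:
print(expected_real_payoff("walk_away", 0.3, 0.7))   # 3000.0
print(expected_real_payoff("take_small", 0.3, 0.7))  # 7000.0
```

Nothing about the decision itself changes between the two runs; only the prior on why I’m being simulated does, and that’s enough to flip which action looks altruistically best.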
If I’m being simulated in a situation which purportedly involves a simulation of me in that exact situation, should I assume that the purpose of my being simulated is to play the role of the simulation in this situation? Is that always anthropically more likely than that I’m being simulated for a different reason?