No, there is no way to write a simulation that supports taking both boxes while also upholding the conditions of the scenario.
Even with an imperfect predictor, you would have to make the predictor effectively useless at predicting, performing no better than 0.1% above chance (with the standard payoffs of $1000 and $1000000, the break-even accuracy is 50.05%; above that, one-boxing has the higher expected value). Even if it predicts some agents well and others poorly, you would need to p-hack the result, ignoring the agents whose choices it predicted well, to get a recommendation to take both boxes.
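To make that threshold concrete, here is a minimal sketch in Python (assuming the standard payoffs of $1000 in box A and $1000000 in box B; the function names and trial count are illustrative) that estimates the expected payoff of each strategy against a predictor of a given accuracy:

```python
import random

BOX_A = 1_000      # box A always holds $1000
BOX_B = 1_000_000  # box B holds $1000000 iff one-boxing was predicted

def payoff(one_boxes: bool, accuracy: float) -> int:
    # The predictor guesses the agent's choice correctly with
    # probability `accuracy`, and the boxes are filled accordingly.
    predicted_one_box = one_boxes if random.random() < accuracy else not one_boxes
    contents_b = BOX_B if predicted_one_box else 0
    return contents_b if one_boxes else BOX_A + contents_b

def average_payoff(one_boxes: bool, accuracy: float, trials: int = 200_000) -> float:
    return sum(payoff(one_boxes, accuracy) for _ in range(trials)) / trials

for acc in (0.5, 0.5005, 0.51, 0.99):
    print(f"accuracy {acc:.4f}: "
          f"one-box ~${average_payoff(True, acc):,.0f}, "
          f"two-box ~${average_payoff(False, acc):,.0f}")
```

At accuracy 0.5, two-boxing comes out ahead by exactly the $1000 in box A; the expected values cross at 0.5005, and anything above that favors one-boxing.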
I wonder if there is a way to prove that anything that cannot be simulated is not possible. I think it should be easier than it seems, because we can grant ourselves a lot of leeway: any amount of time, any arbitrarily powerful computer. But if we prove that even with a near-infinite amount of compute and unlimited time we can’t simulate a scenario, does that make the scenario impossible?
Not that it would be immediately practical, since we do not actually have near-infinite compute or time, but it could be interesting.
The problem isn’t in the simulation part, but in the “supports” part.
You can certainly write a simulation in which an agent decides to take both boxes. By the conditions of the scenario, they get $1000. Does this simulation “support” taking both boxes? No, unless you’re only comparing with alternative actions of not taking a box at all, or burning box B and taking the ashes, or other things that are worse than getting $1000.
However, the scenario states that the agent could take one box, and it is a logical consequence of the scenario setup that in the situations where they do, they get $1000000. That’s better than getting $1000 under the assumptions of the scenario, and so a simulation that actually follows the rules of the scenario cannot support taking two boxes.
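As a toy illustration of that consequence, here is a sketch (assuming a deterministic agent, and a perfect predictor that predicts by simply running the agent’s decision function) in which one-boxing beats two-boxing on every run:

```python
from typing import Callable

Agent = Callable[[], bool]  # returns True to take only box B

def play(agent: Agent) -> int:
    # The predictor "predicts" by running the agent's deterministic
    # decision function before the boxes are filled.
    predicted_one_box = agent()
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if agent() else 1_000 + box_b

print(play(lambda: True))   # one-boxer: 1000000
print(play(lambda: False))  # two-boxer: 1000
```

Any agent such a simulation can run, the predictor can run too, which is exactly why the setup leaves no room for a two-boxing recommendation.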