Well, you can’t simulate it because the mechanism of prediction is unspecified, as is the mechanism of free will that makes the decision. You just don’t know if, in the thought experiment universe, you actually have an open option to choose.
You can very easily simulate the trivial case (ignore causality and decision theory, and assume Omega cheats by changing the box contents after you decide but before the result is revealed), which leads to one-boxing. Or the trivial-but-scenario-rejecting case of the CDT assumption that your choice has literally no impact on the boxes, which leads to two-boxing.
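A minimal sketch of those two trivial cases, assuming the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque one); the function and variable names here are mine, purely for illustration:

```python
BOX_A = 1_000          # transparent box, always contains $1,000
BOX_B_FULL = 1_000_000 # opaque box, full only if Omega predicted one-boxing

def payoff_omega_cheats(one_box: bool) -> int:
    """Omega 'cheats': box B's contents track the actual decision."""
    box_b = BOX_B_FULL if one_box else 0
    return box_b if one_box else box_b + BOX_A

def payoff_cdt(one_box: bool, box_b_full: bool) -> int:
    """CDT assumption: box B was fixed before (and independently of) the choice."""
    box_b = BOX_B_FULL if box_b_full else 0
    return box_b if one_box else box_b + BOX_A

# Omega-cheats case: one-boxing dominates ($1,000,000 vs. $1,000).
assert payoff_omega_cheats(one_box=True) > payoff_omega_cheats(one_box=False)

# CDT case: two-boxing gains exactly $1,000 whatever the fixed prediction was.
for prediction in (True, False):
    assert payoff_cdt(False, prediction) == payoff_cdt(True, prediction) + BOX_A
```

Neither simulation tells you anything about the interesting version of the problem, of course; each just restates its own assumption about how the prediction works.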