Just to clarify, I think your analysis here doesn’t apply to the transparent-boxes version that I presented in Good and Real. There, the predictor’s task is not necessarily to predict what the agent does for real, but rather to predict what the agent would do in the event that the agent sees $1M in the box. (That is, the predictor simulates
what—according to physics—the agent’s configuration would do, if presented with the $1M environment; or equivalently, what the agent’s ‘source code’ returns if called with the $1M argument.)
If the agent would one-box if $1M is in the box, but the predictor leaves the box empty, then the predictor has not predicted correctly, even if the agent (correctly) two-boxes upon seeing the empty box.
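To make the setup concrete, here is a minimal sketch of that predictor, treating the agent as source code evaluated on the $1M input only. All names here are illustrative, not from Good and Real:

```python
# Hypothetical sketch: the transparent-boxes predictor evaluates the
# agent's 'source code' on the $1M argument, regardless of what it
# actually ends up putting in the box.

def agent(box_contents):
    # An agent that one-boxes when it sees $1M, two-boxes otherwise.
    if box_contents == 1_000_000:
        return "one-box"
    return "two-box"

def predictor(agent_source):
    # The predictor's task: simulate the agent only on the $1M input.
    predicted = agent_source(1_000_000)
    # Fill the box iff the agent would one-box upon seeing $1M.
    return 1_000_000 if predicted == "one-box" else 0

filled = predictor(agent)  # this agent gets the $1M
```

The point of the sketch is that `agent_source` is only ever called with the `1_000_000` argument; the agent's behavior on the empty-box input never enters into the prediction's correctness.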
Interesting. This would seem to return it to the class of decision-determined problems, and for an illuminating reason: the algorithm is only run with one set of information, just as, in Newcomb's problem, the algorithm has only one set of information regardless of the boxes' contents.
This way of thinking makes Vladimir's position more intuitive. To put words in his mouth: rather than being not decision-determined at all, the "unfixed" version is merely two-decision-determined, and then left undefined for half the bloody problem.
That’s not essential, though (see the dual-simulation variant in Good and Real).
Well, yeah, so long as all the decisions have defined responses.