Questions in decision theory are not questions about what choices you should make with some sort of unpredictable free will. They are questions about what type of source code you should be running.
Indeed. It is actually worse than that, because the “should be” part is itself already part of your source code. In the setup
A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did.
it is meaningless to ask
What box should you choose?
since the answer is already fixed (and was known to the long-dead predictor). The only self-consistent question is “what did the predictor predict you would do?” Anything else would be sneaking free will back into the setup.
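To make the point concrete, here is a minimal sketch (in Python, with hypothetical names) of the situation: the agent is just a function, and the predictor’s “simulation” is simply running a copy of that function, so the prediction and the later “choice” cannot come apart.

```python
def agent() -> str:
    """The agent's entire decision procedure; its output is fixed by this code."""
    return "Left"


def predict(agent_code) -> str:
    """The predictor "simulates" the agent by running a copy of its code."""
    return agent_code()  # same deterministic code, same output


prediction = predict(agent)  # made long before the "real" decision
choice = agent()             # the agent's actual choice, made later

# The "choice" was already settled by the source code, so asking which box
# the agent *should* choose is really asking what the predictor predicted.
assert prediction == choice
```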