I feel that there’s another issue inherent in the formulation of the problem that this post doesn’t fully address.
The way you formulated the problem, the predictor is asked for a prediction, and then the tennis player looks at the prediction partway through the match. Since we have specified that the prediction will influence the tennis player’s motivation, we are basically in a situation where the predictor’s output will affect the outcome of the game. In that kind of situation, where the outcome of the game depends on the predictor’s prediction, it’s not obvious to me that outputting a “manipulative” prediction is actually wrong… since whatever the predictor chooses to output will end up influencing the world.
Compare this to a situation where you ask me to predict whether you’ll take box A or box B, and upon hearing my prediction, you will take whichever box I predicted. Here there’s no natural “non-manipulative” choice: your decision is fully determined by mine, and I can’t not influence it.
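To make the degenerate case concrete, here is a toy sketch (the function names are mine, purely illustrative): if the chooser simply copies whatever the predictor says, then every possible prediction comes true, so “accuracy” gives the predictor no way to single out a non-manipulative output.

```python
def choose(prediction):
    # The chooser's policy: take whichever box the predictor named.
    return prediction

# Every prediction is self-fulfilling, so accuracy alone
# cannot distinguish a "neutral" prediction from a "manipulative" one.
for prediction in ("A", "B"):
    outcome = choose(prediction)
    assert outcome == prediction
```

Both “A” and “B” are fixed points of the prediction-then-choice loop, so the predictor is forced to pick which world comes about rather than passively report on one.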
The tennis example is not quite as blatant, but I think it still follows the same principle: the intuitive notion of “predict what’s going to happen next” leaves undefined what a “correct, non-manipulative prediction” even means when there’s a causal arrow from the prediction to the outcome.