A “counterfactual” seems to be just any output of a model given inputs that were not observed. That is, a counterfactual is conceptually almost identical to a prediction. Even in deterministic universes, being able to make predictions based on incomplete information is likely useful to agents, and ability to handle counterfactuals is basically free if you have anything resembling a predictive model of the world.
If we have a model of Omega’s behaviour on which anyone choosing box B must receive 10 utility, then our counterfactuals (model outputs) should reflect that. We can of course entertain the idea that Omega doesn’t behave according to such a model, because we have more general models that we can specialize. We must have such models, or we couldn’t make any sense of text such as “let’s suppose Omega is programmed in such a way...”. That sentence in itself establishes a counterfactual (with a sub-model!), since I have no knowledge in reality of anyone named Omega nor of how they are programmed.
We might also have (for some reason) near-certain knowledge that Amy can’t choose box B, but that wasn’t stated as part of the initial scenario. Finding out that Amy in fact chose box A doesn’t utterly erase the ability to employ a model in which Amy chooses box B, and so asking “what would have happened if Amy chose box B” is still a question with a reasonable answer using our knowledge about Omega. A less satisfactory counterfactual question might be “what would happen if Amy chose box A and didn’t receive 5 utility”.
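To make the model-output view concrete, here is a minimal sketch in Python. The names, payoffs, and the `omega_model` function are illustrative assumptions drawn from the scenario above, not anyone’s actual formalism: a model is just a function from inputs to outputs, and a counterfactual query is simply an evaluation at an input that was never observed.

```python
def omega_model(choice: str) -> int:
    """Toy model of Omega's behaviour: anyone choosing box B receives 10 utility."""
    return 10 if choice == "B" else 5

observed_choice = "A"                  # Amy in fact chose box A
print(omega_model(observed_choice))    # prediction consistent with the observation: 5
print(omega_model("B"))                # counterfactual: what if Amy had chosen B? -> 10
```

Note that finding out `observed_choice == "A"` changes nothing about the function itself; the counterfactual query remains as well-defined as the prediction was.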
“And ability to handle counterfactuals is basically free if you have anything resembling a predictive model of the world”—ah, but a predictive model also requires counterfactuals.
No, prediction and counterfactuals share a common mechanism that is neutral between them.
Decision theory is about choosing possible courses of action according to their utility, which implies choosing them for, among other things, their probability. A future action is an event that has not happened yet. A past counterfactual is an event that didn’t happen. There’s a practical difference between the two, but they share a theoretical component: “What would be the output given input Y?” Note how that verbal formulation gives no information about whether a future state or a counterfactual is being considered. The black box making the calculation doesn’t know whether the input it’s receiving represents something that will happen, or something that might have happened.
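As a sketch of that neutrality (the toy payoffs and the `world_model` function are assumptions for illustration only): the same black box evaluates a not-yet-taken future action and a never-taken past one, and nothing in its input says which is which.

```python
import random

def world_model(action: str, rng: random.Random) -> float:
    """Toy stochastic model: maps an input action to an outcome's utility."""
    base = {"A": 5.0, "B": 10.0}[action]
    return base + rng.gauss(0, 1)  # noise stands in for incomplete information

def expected_utility(action: str, samples: int = 1000) -> float:
    rng = random.Random(0)
    return sum(world_model(action, rng) for _ in range(samples)) / samples

# Future choice: compare expected utilities of actions not yet taken.
print(max(["A", "B"], key=expected_utility))   # -> "B"

# Past counterfactual: evaluate the action that was not taken.
print(expected_utility("B"))                   # same calculation, same machinery
```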
I’m puzzled that you are puzzled. JBlack’s analysis, which I completely agree with, shows how and why agents with limited information consider counterfactuals. What further problems are there? Even the issue of highly atypical agents with perfect knowledge doesn’t create that much of a problem, because they can just pretend to have less knowledge—build a simplified model—in order to expand the range of non-contradictory possibilities.