Surgery has the issue of not describing reality. If something in the environment takes a look inside your algorithm, for example at a precursor state to that of an action, there will be a discrepancy between the edited-in counterfactual action and that precursor state. This lets the environment notice that it's in a counterfactual rather than in actuality, which in turn lets it make absurd sacrifices in order to trick the agent about what that counterfactual would be like. The usual way to deal with this is to sidestep the issue by disallowing looking inside the algorithm, but in reality the problem remains. And if an Oracle in the environment recomputes everything without directly taking a look, it's either going to notice the discrepancy, or the counterfactual needs to somehow figure out that the Oracle is doing this and also perform the surgery on what the Oracle is going to think.
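The detectability problem can be made concrete in a toy model (a hypothetical sketch, not anyone's settled formalism; all names here are invented for illustration): the agent's action is a deterministic function of a precursor state, and surgery overwrites the action without updating the precursor, so any environment that recomputes the policy from the precursor can tell it is inside a counterfactual.

```python
# Toy model: surgery on the action leaves the precursor state unchanged,
# so an environment that "looks inside the algorithm" can detect the edit.

def agent_policy(precursor_state: int) -> str:
    """The agent's actual decision rule: a deterministic function of its precursor state."""
    return "cooperate" if precursor_state % 2 == 0 else "defect"

def make_factual(precursor_state: int) -> dict:
    """The unedited world: the action really is computed from the precursor."""
    return {"precursor": precursor_state, "action": agent_policy(precursor_state)}

def surgery(world: dict, forced_action: str) -> dict:
    """Edit in a counterfactual action WITHOUT recomputing the precursor state."""
    edited = dict(world)
    edited["action"] = forced_action
    return edited

def environment_detects_counterfactual(world: dict) -> bool:
    """An environment that inspects the algorithm: it recomputes the action
    from the precursor and notices any discrepancy with the recorded action."""
    return agent_policy(world["precursor"]) != world["action"]

factual = make_factual(precursor_state=4)      # action: "cooperate"
counterfactual = surgery(factual, "defect")    # edited-in action

print(environment_detects_counterfactual(factual))         # False
print(environment_detects_counterfactual(counterfactual))  # True
```

The point of the sketch is only that the edited world is internally inconsistent; an Oracle that recomputes everything plays the role of `environment_detects_counterfactual` here.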
Surgery and counterfactuals are operations on one’s model of reality. This is why “These counterfactuals … do not actually exist”. For practical purposes, surgery can often be carried out as well as needed for an experiment. This is what randomisation is for (a point Pearl makes explicitly).
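Pearl's point about randomisation can be sketched in a toy structural causal model (a minimal illustration, assuming an invented three-variable model Z → X, Z → Y, X → Y): surgery on X cuts the Z → X edge, which is exactly what randomised assignment achieves physically, and it gives a different answer than merely conditioning on X.

```python
import random

# Toy structural causal model: Z -> X, Z -> Y, X -> Y.
# Surgery on X (the do-operator) replaces X's mechanism with a constant,
# severing the confounding Z -> X edge, just as randomisation does.

random.seed(0)

def sample(intervene_x=None):
    z = random.randint(0, 1)                       # confounder
    x = z if intervene_x is None else intervene_x  # surgery replaces X's mechanism
    y = x + z
    return x, y

# Observational: condition on X = 1 (still confounded by Z).
obs = [y for x, y in (sample() for _ in range(10_000)) if x == 1]

# Interventional: do(X = 1) via surgery / randomised assignment.
intv = [y for _, y in (sample(intervene_x=1) for _ in range(10_000))]

print(sum(obs) / len(obs))    # 2.0: conditioning picks out only worlds where Z = 1
print(sum(intv) / len(intv))  # ~1.5: surgery breaks the Z -> X dependence
```

The observational mean is exactly 2.0 because X = 1 only ever occurs alongside Z = 1 in this model, while the intervened mean is near 1.5 because Z keeps its prior distribution under surgery.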
The rest of your comment is about the complications of agents that model themselves and each other, and can even know more about another agent than that agent does about itself, and must consider the possibility that they don’t exist except as a simulation performed by another agent. This is at the very least an active research area with as yet no settled theory (or name?), and indeed is quite beyond the scope of standard causal DAGs.
There are pragmatic models of counterfactuals sufficient for scientific experiments, but there is no clear notion of what these models approximate, or what makes a good model. It's different for actuality, where our notions of both computation and the physical world go into much more detail than practice can handle.
The extreme examples of how the pragmatic models of counterfactuals break down illustrate the much more general problem: instead of an Oracle that thinks about existing only in a simulation, you have things like bargaining, and basically most multi-player games where the players are allowed to think informally. It's just easier to characterize what's going on in the extreme examples.
There are pragmatic models of counterfactuals sufficient for scientific experiments, but there is no clear notion of what these models approximate, or what makes a good model.
Well, not in the sense that we know what the fundamental ontology of counterfactuals is. But then we don't know whether the universe is fundamentally deterministic or not, which is more or less the same thing. It's not as though there's a special problem about counterfactuals that's much worse than all the other problems.
If you take a situation A and perform "surgery" on it to turn it into situation B, then situation B is not a description of the reality of situation A... but B could still happen, and might have happened. You can only map 0.0...1% of reality, so you can't remotely guarantee that the conjectured situation does not occur somewhere in time or space.
Whenever you plan to do something that hasn't been done before, you do it by applying known laws to a novel situation: for instance, sending a rocket to the Moon is a novel application of Newton's laws.
To insist that every situation is unique, and that there is no framework of laws that allows you to answer what-if questions, is a basic rejection of science!