The Sequences don’t discuss the nature of counterfactuals. And all accounts of anything in the vicinity of free will or decision theory that look promising to me have a nature-of-counterfactuals-shaped hole in them.
Counterfactuals are, for example, discussed here. But likely you have seen that at some point, and are familiar with the Pearlian account of counterfactuals as surgery on causal graphs. Can you enlarge on what you think is lacking?
Surgery has the problem of not describing reality. Suppose something in the environment looks inside your algorithm, for example at a precursor state of an action. There will then be a discrepancy between the surgically edited-in counterfactual action and the precursor state, which lets the environment notice that it’s in a counterfactual rather than in actuality, and so lets it make absurd sacrifices in order to trick the agent about what that counterfactual would be like. The usual way to deal with this is to sidestep the issue by disallowing looking inside the algorithm, but in reality the problem remains. And if an Oracle in the environment recomputes everything without directly taking a look, it’s either going to notice the discrepancy, or the counterfactual needs to somehow figure out that the Oracle is doing this and also perform the surgery on what the Oracle is going to think.
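A minimal toy sketch of the discrepancy described above (the agent, the states, and every name here are invented for illustration): a deterministic "agent" whose action is a fixed function of a precursor state, and an environment that inspects that precursor. Surgery edits the action but not the precursor, so the inconsistency is detectable.

```python
# Toy sketch (all names invented): a deterministic agent whose action is a
# fixed function of an internal precursor state. Pearl-style surgery edits
# the action node while leaving the precursor untouched, so an environment
# that inspects the precursor can tell counterfactual from actuality.

def precursor(world_seed):
    # Deliberation state the agent passes through before acting.
    return world_seed % 2

def action_from(precursor_state):
    # The agent's policy: the action is fully determined by the precursor.
    return "one-box" if precursor_state == 0 else "two-box"

def detects_counterfactual(precursor_state, observed_action):
    # An environment that looks inside the agent checks whether the observed
    # action is consistent with the precursor state it computed.
    return action_from(precursor_state) != observed_action

seed = 4
p = precursor(seed)               # actual precursor state
factual_action = action_from(p)   # what the agent actually does

# Surgery: force the *other* action without editing the precursor.
surgical_action = "two-box" if factual_action == "one-box" else "one-box"

print(detects_counterfactual(p, factual_action))   # False: actuality is consistent
print(detects_counterfactual(p, surgical_action))  # True: the edit is visible
```

An environment like this can then treat the detected counterfactual arbitrarily (the "absurd sacrifices") without ever paying any cost in actuality.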
Surgery and counterfactuals are operations on one’s model of reality. This is why “These counterfactuals … do not actually exist”. For practical purposes, surgery can often be carried out as well as needed for an experiment. This is what randomisation is for (a point Pearl makes explicitly).
The rest of your comment is about the complications of agents that model themselves and each other, and can even know more about another agent than that agent does about itself, and must consider the possibility that they don’t exist except as a simulation performed by another agent. This is at the very least an active research area with as yet no settled theory (or name?), and indeed is quite beyond the scope of standard causal DAGs.
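The randomisation point above can be made concrete in a toy computation (all numbers and variable names are made up): a confounder Z drives both treatment X and outcome Y, so the observational conditional P(Y=1 | X=1) differs from the surgical P(Y=1 | do(X=1)); randomised assignment of X is what recovers the latter in practice.

```python
# Toy sketch of surgery vs. observation, with invented numbers. A confounder
# Z drives both the treatment X and the outcome Y. Enumerating the joint
# distribution exactly (no sampling) shows the observational conditional
# differing from the interventional one that randomisation would recover.

from itertools import product

P_Z = {0: 0.5, 1: 0.5}
P_X_given_Z = {0: {1: 0.2}, 1: {1: 0.8}}  # confounded treatment choice
# P(Y=1 | x, z): treatment helps (+0.4), confounder helps (+0.2)
P_Y_given_XZ = {(x, z): 0.3 + 0.4 * x + 0.2 * z for x, z in product((0, 1), (0, 1))}

def p_y_given_x_observational(x):
    # Conditioning on X=x shifts Z's distribution (Bayes), dragging Y with it.
    num = den = 0.0
    for z in (0, 1):
        px = P_X_given_Z[z][1] if x == 1 else 1 - P_X_given_Z[z][1]
        num += P_Z[z] * px * P_Y_given_XZ[(x, z)]
        den += P_Z[z] * px
    return num / den

def p_y_given_do_x(x):
    # Surgery: cut the Z -> X arrow; Z keeps its prior distribution.
    return sum(P_Z[z] * P_Y_given_XZ[(x, z)] for z in (0, 1))

print(p_y_given_x_observational(1))  # 0.86: confounded estimate
print(p_y_given_do_x(1))             # 0.8: the causal effect
```

Randomising X in a real experiment makes the observed conditional match the `do`-quantity, which is what carrying the surgery out "as well as needed for an experiment" amounts to.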
There are pragmatic models of counterfactuals sufficient for scientific experiments, but there is no clear notion of what these models approximate, or what makes a good model. It’s different for actuality, where the notions of both computation and the physical world go into much more detail than practice can handle.
The extreme examples of how the pragmatic models of counterfactuals break down illustrate a much more general problem: instead of an Oracle that wonders whether it exists only in a simulation, you have things like bargaining, and indeed most multi-player games where the players are allowed to think informally. It’s just easier to characterize what’s going on in the extreme examples.
There are pragmatic models of counterfactuals sufficient for scientific experiments, but there is no clear notion of what these models approximate, or what makes a good model.
Well, there’s no clear notion only in the sense that we don’t know what the fundamental ontology of counterfactuals is. But then we don’t know whether the universe is fundamentally deterministic or not, which is more or less the same question. It’s not like there’s a special problem about counterfactuals that’s much worse than all the other problems.
If you take a situation A and perform “surgery” on it to turn it into situation B, then situation B is not a description of the reality of situation A... but B could still happen, and might have happened. You can only map 0.0...1 % of reality, so you can’t remotely guarantee that the conjectured situation does not occur somewhere in time or space.
Whenever you plan to do something that hasn’t been done before, you do it by applying known laws to a novel situation... for instance, sending a rocket to the moon is a novel application of Newton’s laws.
To insist that every situation is unique, and that there is no framework of laws that allows you to answer what-if questions, is a basic rejection of science!
So why is that a problem? In an indeterministic universe, you automatically have real counterfactuals, in the sense that a given situation could have turned out differently... it’s two different ways of looking at the same fundamental fact. In a deterministic universe, you don’t get real counterfactuals, but you can still have logical counterfactuals. What’s the actual problem?
The possibilities given by nondeterminism are not the counterfactuals relevant for decision making; there is still a need for counterfactuals there, as additional constructions (in a counterfactual, the probability of the state transition corresponding to the possible decision being considered is going to be high, unlike the “a priori” probability of that transition, but they are the same in the actual world). Any problem with logical counterfactuals is still present in the nondeterministic case, because you can build computers out of nondeterministic components, and there is logical uncertainty about probabilities.
There is no satisfactory account of logical counterfactuals. There are mostly unprincipled constructions that don’t work very well, or decision principles that try to do things without counterfactuals (even in semantic accounts), but then it becomes unclear how well they work as decision principles.
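The parenthetical point above about transition probabilities can be illustrated with invented numbers: the agent’s a priori chance of taking some action may be tiny, yet the counterfactual constructed to evaluate that action sets the corresponding transition’s probability to 1.

```python
# Sketch of the transition-probability point, with made-up numbers: the
# a priori probability of the "explore" transition is tiny, but inside the
# counterfactual built to evaluate that decision it is set to 1.

policy = {"explore": 0.01, "exploit": 0.99}  # a priori action probabilities
utility = {"explore": 10.0, "exploit": 3.0}

# Actual world: the expectation mixes actions by their a priori probabilities.
actual_expectation = sum(policy[a] * utility[a] for a in policy)

def counterfactual_expectation(action):
    # Surgery on the decision node: the considered action's transition
    # probability becomes 1; everything downstream is recomputed as-is.
    forced = {a: (1.0 if a == action else 0.0) for a in policy}
    return sum(forced[a] * utility[a] for a in policy)
```

In the actual world both probabilities agree (the agent does whatever it does); the discrepancy only exists between the a priori model and the constructed counterfactual.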
In an important sense, the possibilities given by nondeterminism are the only ones important for decision making, because without them, there is just one thing you can will and must do.
in a counterfactual, the probability of the state transition corresponding to the possible decision being considered is going to be high, unlike “a priori” probability of that transition, but they are the same in the actual world
Why? You don’t have omniscient knowledge of the world, and you don’t have perfect insight into yourself either.
Any problem with logical counterfactuals is still present in the nondeterministic case, because you can build computers out of nondeterministic components, and there is logical uncertainty about probabilities.
You need to explain why there is any problem with logical counterfactuals.
There is no satisfactory account of logical counterfactuals.
Sure there is. There just isn’t an account of logical counterfactuals given 1) determinism, 2) effectively omniscient knowledge of how the world works, and 3) no sandboxing, erasure of knowledge, etc.
But 1) isn’t known to be true, 2) is known to be false, and 3) is always available anyway.
Kennaway: There is no conflict between determinism and counterfactuals.
Yudkowsky: These counterfactuals are untestable, unobservable, and do not actually exist
Me: choose one!
We can and do test counterfactuals by re-running experiments with different starting conditions. The claim that …
...is profoundly counter-scientific.
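A trivial sketch of what "re-running with different starting conditions" buys you (the dynamics here are a made-up toy map): the same law applied to a varied initial condition answers the what-if question.

```python
# Toy sketch: testing a counterfactual by re-running the same deterministic
# law from a different starting condition. The map below is invented; the
# point is only that known laws plus a varied initial state answer a what-if.

def law(x):
    # Made-up deterministic dynamics.
    return 3 * x % 7

def run(initial_state, steps=10):
    state = initial_state
    for _ in range(steps):
        state = law(state)
    return state

actual = run(initial_state=2)    # what happened
what_if = run(initial_state=5)   # what would have happened instead
```

The re-run does not observe the counterfactual of the original run; it observes a new actual run whose conditions match the counterfactual’s premise.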