The first intuition is that the counterfactual involves changing the physical result of your decision making, not the process of your decision making itself. The second intuition is that the counterfactual involves replacing the process of your decision making, such that you’d take a different action than you normally would.
Hm, this makes me realize I’m not fully sure what’s meant by “counterfactual” here.
I normally think of it as, like: I’m looking at a world history, e.g. with variables A and B and times t=0,1,2 and some relationships between them. And I say “at t=1, A took value a. What if at t=1, A had taken value a′ instead? What would that change at t=2?” It’s clear how to fit decisions I’ve made in the past into that framework.
Or I can run it forwards, looking from t=0 to t=1,2, imagining what’s going to happen by default, and imagining what happens if I make a change at some point. It’s less clear how to fit my own decisions into this framework, because what does “by default” mean then? But I can just pick some decision to plug in at every point where I get to make one, and say that all of these picks give me a counterfactual. (And perhaps by extension, if there are no decision points, I should also consider the imagined “what’s going to happen by default” world to be a counterfactual.)
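To make the forward version concrete, here’s a toy sketch of the kind of thing I mean (the variables, dynamics, and numbers are all made up for illustration): a world history over t=0,1,2 with variables A and B, where each choice of what to plug in at the decision points yields one counterfactual trajectory.

```python
def step(a, b):
    """One step of the toy world's dynamics: the next B depends on A and B."""
    return a + 2 * b

def run_history(decisions):
    """Run forward from t=0, plugging in a chosen value of A at each
    decision point. Each choice of `decisions` gives one trajectory,
    i.e. one counterfactual world history."""
    b = 1  # initial value of B at t=0 (arbitrary)
    history = []
    for a in decisions:
        history.append((a, b))
        b = step(a, b)
    history.append((None, b))  # B at t=2; no decision point left
    return history

default = run_history([0, 0])          # the "by default" world
counterfactual = run_history([0, 5])   # same pick at t=0, different pick at t=1
```

Here `default` and `counterfactual` agree at t=0 and t=1’s starting state, and diverge only in what B ends up being at t=2, which matches the “pick some decision to plug in at every decision point” picture above.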
But if the discussion of counterfactuals starts by talking about decisions I’ve made, or am going to make, then it’s not clear to me whether it can be extended to talk about general interventions on world histories.
I think that the first intuition corresponds to “interventions on causal models using the do operator”. That’s something I don’t think I understand deeply, but I do think I get the basics of, like, “what is this field of study trying to do, what questions is it asking, what sorts of objects does it work with and how do we manipulate them”. (E.g. if this is what we’re doing, then we say “we’re allowed to just set A=a′ at t=1, we don’t need to go back to t=0 and figure out how that state of affairs could have come about”.)
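The “just set A=a′, don’t go back to t=0” point can be sketched in code too. Here’s a minimal toy causal model (my own made-up mechanisms, not standard notation): normally A at t=1 is computed from the state at t=0, and do(A=a′) simply overrides that mechanism without touching anything at t=0.

```python
def simulate(initial, do_A=None):
    """Run a toy causal model forward from t=0 to t=2.

    initial: the state at t=0.
    do_A: if not None, sever A from its usual cause and force A = do_A
          (the do-operator move: override the mechanism, don't rewind).
    """
    # t=1: A is normally caused by the t=0 state...
    a = 3 * initial
    # ...but an intervention replaces that mechanism wholesale,
    # without requiring `initial` to have been anything different.
    if do_A is not None:
        a = do_A
    # t=2: B depends only on A in this toy model.
    b = a + 1
    return b

observed = simulate(initial=2)            # A follows from t=0 as usual
intervened = simulate(initial=2, do_A=0)  # A forced to 0; t=0 untouched
```

The contrast with ordinary conditioning is that conditioning on A=0 would ask “what must t=0 have been like for A to be 0?”, whereas the intervention leaves t=0 exactly as it was.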
Does the second intuition correspond to something that we can talk about without talking about my decisions? (And if so, is it a different thing than the first intuition? Or is it, like, they both naturally extend to a world with no decision points for me, but the way they extend to that is the same in those worlds, and so they only differ in worlds that do have decision points for me?)