Can Counterfactuals Be True?
Followup to: Probability is Subjectively Objective
The classic explanation of counterfactuals begins with this distinction:
If Lee Harvey Oswald didn’t shoot John F. Kennedy, then someone else did.
If Lee Harvey Oswald hadn’t shot John F. Kennedy, someone else would have.
In ordinary usage we would agree with the first statement, but not the second (I hope).
If, somehow, we learn the definite fact that Oswald did not shoot Kennedy, then someone else must have done so, since Kennedy was in fact shot.
But if we went back in time and removed Oswald, while leaving everything else the same, then—unless you believe there was a conspiracy—there’s no particular reason to believe Kennedy would be shot:
We start by imagining the same historical situation that existed in 1963—by a further act of imagination, we remove Oswald from our vision—we run forward the laws that we think govern the world—visualize Kennedy parading through in his limousine—and find that, in our imagination, no one shoots Kennedy.
It’s an interesting question whether counterfactuals can be true or false. We never get to experience them directly.
If we disagree on what would have happened if Oswald hadn’t been there, what experiment could we perform to find out which of us is right?
And if the counterfactual is something unphysical—like, “If gravity had stopped working three days ago, the Sun would have exploded”—then there aren’t even any alternate histories out there to provide a truth-value.
It’s not as simple as saying that if the bucket contains three pebbles, and the pasture contains three sheep, the bucket is true.
Since the counterfactual event only exists in your imagination, how can it be true or false?
So… is it just as fair to say that “If Oswald hadn’t shot Kennedy, the Sun would have exploded”?
After all, the event only exists in our imaginations—surely that means it’s subjective, so we can say anything we like?
But so long as we have a lawful specification of how counterfactuals are constructed—a lawful computational procedure—then the counterfactual result of removing Oswald, depends entirely on the empirical state of the world.
If there was no conspiracy, then any reasonable computational procedure that simulates removing Oswald’s bullet from the course of history, ought to return an answer of Kennedy not getting shot.
“Reasonable!” you say. “Ought!” you say.
But that’s not the point; the point is that if you do pick some fixed computational procedure, whether it is reasonable or not, then either it will say that Kennedy gets shot, or not, and what it says will depend on the empirical state of the world. So that, if you tell me, “I believe that this-and-such counterfactual construal, run over Oswald’s removal, preserves Kennedy’s life”, then I can deduce that you don’t believe in the conspiracy.
Indeed, so long as we take this computational procedure as fixed, then the actual state of the world (which either does include a conspiracy, or does not) presents a ready truth-value for the output of the counterfactual.
In general, if you give me a fixed computational procedure, like “multiply by 7 and add 5”, and then you point to a 6-sided die underneath a cup, and say, “The result-of-procedure is 26!” then it’s not hard at all to assign a truth value to this statement. Even if the actual die under the cup only ever takes on the values between 1 and 6, so that “26” is not found anywhere under the cup. The statement is still true if and only if the die is showing 3; that is its empirical truth-condition.
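The die-under-the-cup example can be sketched directly. This is a minimal illustration of the point above, with the procedure and variable names chosen for this example:

```python
# The fixed computational procedure: multiply by 7 and add 5.
def procedure(die):
    return die * 7 + 5

# The claim "the result-of-procedure is 26" has an empirical
# truth-condition: it is true if and only if the hidden die shows 3,
# even though "26" appears nowhere under the cup.
for die in range(1, 7):
    claim_is_true = (procedure(die) == 26)
    assert claim_is_true == (die == 3)
```

The procedure never needs a "26" to exist anywhere in the world; the truth-value of the claim is fixed entirely by the actual state of the die.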
And what about the statement ((3 * 7) + 5) = 26? Where is the truth-condition for that statement located? This I don’t know; but I am nonetheless quite confident that it is true. Even though I am not confident that this ‘true’ means exactly the same thing as the ‘true’ in “the bucket is ‘true’ when it contains the same number of pebbles as sheep in the pasture”.
So if someone I trust—presumably someone I really trust—tells me, “If Oswald hadn’t shot Kennedy, someone else would have”, and I believe this statement, then I believe the empirical reality is such as to make the counterfactual computation come out this way. Which would seem to imply the conspiracy. And I will anticipate accordingly.
Or if I find out that there was a conspiracy, then this will confirm the truth-condition of the counterfactual—which might make a bit more sense than saying, “Confirm that the counterfactual is true.”
But how do you actually compute a counterfactual? For this you must consult Judea Pearl. Roughly speaking, you perform surgery on graphical models of causal processes; you sever some variables from their ordinary parents and surgically set them to new values, and then recalculate the probability distribution.
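Pearl-style surgery can be sketched on a toy structural model of the Oswald case. The variables and mechanisms below are invented for illustration, not taken from Pearl; the point is only the shape of the operation, severing a variable from its causes and setting it by fiat:

```python
# A toy structural causal model of the example above.
# Mechanism: Kennedy is shot if Oswald fires, or if a
# conspiracy's backup shooter fires. (Assumed for illustration.)
def kennedy_shot(oswald_shoots, conspiracy):
    return oswald_shoots or conspiracy

# Actual world, no conspiracy: Oswald shoots, Kennedy is shot.
actual = kennedy_shot(oswald_shoots=True, conspiracy=False)

# Counterfactual surgery: sever oswald_shoots from its ordinary
# causes, set it to False, keep the rest of the world fixed,
# and recompute downstream variables.
counterfactual = kennedy_shot(oswald_shoots=False, conspiracy=False)

# The same surgery, run over a world that does contain a conspiracy,
# returns the opposite answer.
counterfactual_with_conspiracy = kennedy_shot(
    oswald_shoots=False, conspiracy=True
)
```

With no conspiracy the surgery returns "Kennedy not shot"; with a conspiracy it returns "Kennedy shot." The counterfactual's output depends entirely on the empirical state of the world fed into the fixed procedure, which is the whole point.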
There are other ways of defining counterfactuals, but I confess they all strike me as entirely odd. Even worse, you have philosophers arguing over what the value of a counterfactual really is or really means, as if there were some counterfactual world actually floating out there in the philosophical void. If you think I’m attacking a strawperson here, I invite you to consult the philosophical literature on Newcomb’s Problem.
A lot of philosophy seems to me to suffer from “naive philosophical realism”—the belief that philosophical debates are about things that automatically and directly exist as propertied objects floating out there in the void.
You can talk about an ideal computation, or an ideal process, that would ideally be applied to the empirical world. You can talk about your uncertain beliefs about the output of this ideal computation, or the result of the ideal process.
So long as the computation is fixed, and so long as the computation itself ranges only over actually existent things, or the results of other computations previously defined. You should not have your computation be over “nearby possible worlds” unless you can tell me how to compute those, as well.
A chief sign of naive philosophical realism is that it does not tell you how to write a computer program that computes the objects of its discussion.
I have yet to see a camera that peers into “nearby possible worlds”—so even after you’ve analyzed counterfactuals in terms of “nearby possible worlds”, I still can’t write an AI that computes counterfactuals.
But Judea Pearl tells me just how to compute a counterfactual, given only my beliefs about the actual world.
I strongly privilege the real world that actually exists, and to a slightly lesser degree, logical truths about mathematical objects (preferably finite ones). Anything else you want to talk about, I need to figure out how to describe in terms of the first two—for example, as the output of an ideal computation run over the empirical state of the real universe.
The absence of this requirement as a condition, or at least a goal, of modern philosophy, is one of the primary reasons why modern philosophy is often surprisingly useless in my AI work. I’ve read whole books about decision theory that take counterfactual distributions as givens, and never tell you how to compute the counterfactuals.
Oh, and to talk about “the probability that John F. Kennedy was shot, given that Lee Harvey Oswald didn’t shoot him”, we write:

P(Kennedy_shot | Oswald_not)
And to talk about “the probability that John F. Kennedy would have been shot, if Lee Harvey Oswald hadn’t shot him”, we write:
P(Oswald_not []-> Kennedy_shot)
That little symbol there is supposed to be a box with an arrow coming out of it, but I don’t think Unicode has it.
Part of The Metaethics Sequence
Next post: “Math is Subjunctively Objective”
Previous post: “Existential Angst Factory”