Can Counterfactuals Be True?

Followup to: Probability is Subjectively Objective

The classic explanation of counterfactuals begins with this distinction:

  1. If Lee Harvey Oswald didn’t shoot John F. Kennedy, then someone else did.

  2. If Lee Harvey Oswald hadn’t shot John F. Kennedy, someone else would have.

In ordinary usage we would agree with the first statement, but not the second (I hope).

If, somehow, we learn the definite fact that Oswald did not shoot Kennedy, then someone else must have done so, since Kennedy was in fact shot.

But if we went back in time and removed Oswald, while leaving everything else the same, then—unless you believe there was a conspiracy—there’s no particular reason to believe Kennedy would be shot:

We start by imagining the same historical situation that existed in 1963—by a further act of imagination, we remove Oswald from our vision—we run forward the laws that we think govern the world—visualize Kennedy parading through in his limousine—and find that, in our imagination, no one shoots Kennedy.

It’s an interesting question whether counterfactuals can be true or false. We never get to experience them directly.

If we disagree on what would have happened if Oswald hadn’t been there, what experiment could we perform to find out which of us is right?

And if the counterfactual is something unphysical—like, “If gravity had stopped working three days ago, the Sun would have exploded”—then there aren’t even any alternate histories out there to provide a truth-value.

It’s not as simple as saying that if the bucket contains three pebbles, and the pasture contains three sheep, the bucket is true.

Since the counterfactual event only exists in your imagination, how can it be true or false?

So… is it just as fair to say that “If Oswald hadn’t shot Kennedy, the Sun would have exploded”?

After all, the event only exists in our imaginations—surely that means it’s subjective, so we can say anything we like?

But so long as we have a lawful specification of how counterfactuals are constructed—a lawful computational procedure—then the counterfactual result of removing Oswald depends entirely on the empirical state of the world.

If there was no conspiracy, then any reasonable computational procedure that simulates removing Oswald’s bullet from the course of history ought to return an answer of Kennedy not getting shot.

“Reasonable!” you say. “Ought!” you say.

But that’s not the point; the point is that if you do pick some fixed computational procedure, whether it is reasonable or not, then either it will say that Kennedy gets shot, or not, and what it says will depend on the empirical state of the world. So that, if you tell me, “I believe that this-and-such counterfactual construal, run over Oswald’s removal, preserves Kennedy’s life”, then I can deduce that you don’t believe in the conspiracy.

Indeed, so long as we take this computational procedure as fixed, the actual state of the world (which either does include a conspiracy, or does not) presents a ready truth-value for the output of the counterfactual.

In general, if you give me a fixed computational procedure, like “multiply by 7 and add 5”, and then you point to a 6-sided die underneath a cup, and say, “The result-of-procedure is 26!” then it’s not hard at all to assign a truth value to this statement, even though the actual die under the cup only ever takes on the values between 1 and 6, so that “26” is not found anywhere under the cup. The statement is still true if and only if the die is showing 3; that is its empirical truth-condition.
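The die example fits in a few lines of Python (a minimal sketch; the procedure and the “26” claim are taken from the example above, and the function name `procedure` is my own label):

```python
# The fixed computational procedure from the example: multiply by 7, add 5.
def procedure(die_face):
    return die_face * 7 + 5

# The claim "the result-of-procedure is 26" has a definite empirical
# truth-condition: it is true if and only if the die is showing 3,
# even though no face of the die ever displays 26 itself.
for face in range(1, 7):
    assert (procedure(face) == 26) == (face == 3)
```

The procedure never needs a “26” to exist under the cup; the hidden die face alone settles the claim.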

And what about the statement ((3 * 7) + 5) = 26? Where is the truth-condition for that statement located? This I don’t know; but I am nonetheless quite confident that it is true. Even though I am not confident that this ‘true’ means exactly the same thing as the ‘true’ in “the bucket is ‘true’ when it contains the same number of pebbles as sheep in the pasture”.

So if someone I trust—presumably someone I really trust—tells me, “If Oswald hadn’t shot Kennedy, someone else would have”, and I believe this statement, then I believe the empirical reality is such as to make the counterfactual computation come out this way. Which would seem to imply the conspiracy. And I will anticipate accordingly.

Or if I find out that there was a conspiracy, then this will confirm the truth-condition of the counterfactual—which might make a bit more sense than saying, “Confirm that the counterfactual is true.”

But how do you actually compute a counterfactual? For this you must consult Judea Pearl. Roughly speaking, you perform surgery on graphical models of causal processes; you sever some variables from their ordinary parents and surgically set them to new values, and then recalculate the probability distribution.
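That surgery can be sketched on a toy structural model (everything here is a hypothetical illustration of the severing step, not Pearl’s own notation; the variables `oswald_shoots` and `conspiracy` are made up for the example):

```python
# Toy structural model (purely illustrative):
#   conspiracy -> backup_shooter -> kennedy_shot <- oswald_shoots
def kennedy_shot(oswald_shoots, conspiracy):
    backup_shooter = conspiracy      # mechanism: a conspiracy fields a backup
    return oswald_shoots or backup_shooter

# Counterfactual surgery: sever oswald_shoots from its ordinary causes,
# force it to False, and rerun the remaining mechanisms with the
# empirical "conspiracy" variable left at its real-world value.
def counterfactual_shot(conspiracy):
    return kennedy_shot(oswald_shoots=False, conspiracy=conspiracy)

assert counterfactual_shot(conspiracy=False) is False  # no conspiracy: Kennedy lives
assert counterfactual_shot(conspiracy=True) is True    # conspiracy: shot anyway
```

Note that the counterfactual’s output is fixed entirely by the empirical value of `conspiracy`, which is the whole point.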

There are other ways of defining counterfactuals, but I confess they all strike me as entirely odd. Even worse, you have philosophers arguing over what the value of a counterfactual really is or really means, as if there were some counterfactual world actually floating out there in the philosophical void. If you think I’m attacking a strawperson here, I invite you to consult the philosophical literature on Newcomb’s Problem.

A lot of philosophy seems to me to suffer from “naive philosophical realism”—the belief that philosophical debates are about things that automatically and directly exist as propertied objects floating out there in the void.

You can talk about an ideal computation, or an ideal process, that would ideally be applied to the empirical world. You can talk about your uncertain beliefs about the output of this ideal computation, or the result of the ideal process.

So long as the computation is fixed, and so long as the computation itself ranges only over actually existent things, or the results of other computations previously defined. You should not have your computation range over “nearby possible worlds” unless you can tell me how to compute those, as well.

A chief sign of naive philosophical realism is that it does not tell you how to write a computer program that computes the objects of its discussion.

I have yet to see a camera that peers into “nearby possible worlds”—so even after you’ve analyzed counterfactuals in terms of “nearby possible worlds”, I still can’t write an AI that computes counterfactuals.

But Judea Pearl tells me just how to compute a counterfactual, given only my beliefs about the actual world.

I strongly privilege the real world that actually exists, and to a slightly lesser degree, logical truths about mathematical objects (preferably finite ones). Anything else you want to talk about, I need to figure out how to describe in terms of the first two—for example, as the output of an ideal computation run over the empirical state of the real universe.

The absence of this requirement as a condition, or at least a goal, of modern philosophy, is one of the primary reasons why modern philosophy is often surprisingly useless in my AI work. I’ve read whole books about decision theory that take counterfactual distributions as givens, and never tell you how to compute the counterfactuals.

Oh, and to talk about “the probability that John F. Kennedy was shot, given that Lee Harvey Oswald didn’t shoot him”, we write:

P(Kennedy_shot|Oswald_not)

And to talk about “the probability that John F. Kennedy would have been shot, if Lee Harvey Oswald hadn’t shot him”, we write:

P(Oswald_not []-> Kennedy_shot)

That little symbol there is supposed to be a box with an arrow coming out of it, but I don’t think Unicode has it.
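The gap between those two expressions can be made concrete with a toy joint distribution. All the numbers below are made up purely for illustration, and the mechanism is the same hypothetical “backup shooter iff conspiracy” model as before:

```python
P_CONSPIRACY = 0.05   # illustrative prior that a backup conspiracy exists
P_OSWALD = 0.9        # illustrative prior that Oswald shoots

def p_world(conspiracy, oswald):
    # Joint probability of one possible world (variables independent a priori).
    return ((P_CONSPIRACY if conspiracy else 1 - P_CONSPIRACY) *
            (P_OSWALD if oswald else 1 - P_OSWALD))

def shot(oswald, conspiracy):
    # Mechanism: Kennedy is shot if Oswald shoots, or a conspiracy
    # fields a backup shooter.
    return oswald or conspiracy

worlds = [(c, o) for c in (False, True) for o in (False, True)]

# Ordinary conditioning: given the known fact that Kennedy was shot,
# learning "Oswald didn't shoot" forces the conspiracy world.
evidence = [(c, o) for c, o in worlds if not o and shot(o, c)]
p_conspiracy_given = (sum(p_world(c, o) for c, o in evidence if c) /
                      sum(p_world(c, o) for c, o in evidence))
assert p_conspiracy_given == 1.0

# Counterfactual surgery: start from the actual world (Oswald shot),
# keep the resulting beliefs about "conspiracy", force oswald = False,
# and rerun the mechanism.
actual = [(c, o) for c, o in worlds if o]
p_actual = sum(p_world(c, o) for c, o in actual)
p_cf_shot = sum(p_world(c, o) for c, o in actual if shot(False, c)) / p_actual
assert abs(p_cf_shot - P_CONSPIRACY) < 1e-9
```

On these made-up numbers, conditioning pins down the conspiracy with certainty, while the counterfactual probability of Kennedy being shot stays at the 5% conspiracy prior: the two expressions really do answer different questions.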

Part of The Metaethics Sequence

Next post: “Math is Subjunctively Objective”

Previous post: “Existential Angst Factory”