Circular Counterfactuals: “Only that which Happens is Possible”

“Only that which happens is possible.” -The Megarian School

You make a prediction; it doesn’t happen. Could it have?

Let’s ignore the cognitive bias literature here, and Tetlock’s wonderful data on how many forecasters claim they were very, very close to being correct, if it weren’t for that one little thing that got in their way. Instead, let’s focus on the logic and metaphysics. Could it have happened? Immediately we are thrust into the world of counterfactual reasoning. What is the status of the counterfactual? Do counterfactuals exist as ontological primitives, or are they something we discover/​invent as a heuristic for thinking? The ramifications are far-reaching, for counterfactuals are a central tool in the way we think about causality. In other words, because many scientific fields employ counterfactual reasoning and search for the causes of phenomena, an account of counterfactuals that supplies a firm foundation should allow us to build clearer and cleaner models for interpreting the world, and in the case of AI, models which interpret the world.

While Tetlock is interested in the habits of thought required to be skilled at making true counterfactual statements, and others on LW are interested in formalisms and in engineering those formalisms to see what comes out, in this paper I am interested in what metaphysical presuppositions are possible. The purpose here is to see how counterfactuals fit into epistemology, and how different metaphysical assumptions change our understanding of what counterfactuals are. (By metaphysical assumptions, I mean assumptions about the nature of causality and our capacity to identify causal connections.)

I started down investigations of this sort in 2017, because I was worried about the ontological status of two common heuristic tools in behavioral science and on LW: the distinction between the inside/​outside view and the formation of priors. The inside/​outside view distinction seemed to me subjective, and priors were (at the time) assumed to be almost arbitrary, in the sense that it did not matter what priors one started out with so long as they were subject to updating on new evidence. This also seemed wrong to me. I was interested in where good priors come from, the developmental process that leads to “priors hungry for updating”. In 2019 I worked on an Adversarial Collaboration on the nature of counterfactuals, which never came to fruition. We wrote about 44 pages of preliminary text before my coauthor left the project.

Since then (at the behest of a bounty), I have refined a few more thoughts on counterfactuals, but I do not think I have found the perfect coherent account. One reason for this is that while reasoning about counterfactuals, I feel forced to switch between different systems for thinking about causality. This creates two systems for thinking about counterfactuals, neither of which I think is exhaustive, nor do they perfectly lead into one another. Perhaps someone else will be able to work my two sets into a complete system. But for now, I offer my best account.

Set 1: Counterfactuals as a Causal Claim

Causality is the central pin around which counterfactuals rotate. So perhaps a short investigation of counterfactuals and causality will demonstrate whether the relationship between them is circular. That is where my first set of propositions comes from. They are essentially equivalent to Judea Pearl’s views.

The argument works as follows. Within any model we may assume that:

  1. Counterfactuals are a specific type of conditional prediction.

  2. Conditional predictions are a specific type of causal claim, that is, one in which two items are linked by a correlation.

  3. Causal claims are conditional predictions.

  4. Thus, any counterfactual claim can be restated as a conditional prediction.

    And finally,

  5. There is a feedback loop in acts of observation that allows us to create more conditional predictions.

Consider the prediction inherent in this sentence from Ned Hall: “If I had not ducked, the boulder would have knocked my skull.” The prediction is that the boulder would have hit the hiker’s head. The prediction, right or wrong, contains a causal claim – the claim that the forces acting on the falling boulder were such that it would collide with the hiker’s skull. But here is the tricky part! The causal claim contains a counterfactual.

The boulder will hit location X, if nothing changes. [Conditional Prediction]

The boulder would have hit X, if the hiker had not moved from X. [Counterfactual]
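These two statements can be rendered in a single toy structural model, in the spirit of Pearl’s abduction/action/prediction recipe. A minimal sketch, assuming a one-equation model whose names and rule are my own invention, not anything from Hall:

```python
# A one-equation structural "model" of the boulder example.
# The variable names and the rule are illustrative assumptions.

def boulder_hits(hiker_at_x: bool) -> bool:
    """Causal claim: the boulder strikes whatever occupies location X."""
    return hiker_at_x

# Conditional prediction: "the boulder will hit X, if nothing changes."
print(boulder_hits(hiker_at_x=True))    # True

# Actual world: the hiker ducked away from X, so no hit.
print(boulder_hits(hiker_at_x=False))   # False

# Counterfactual: hold the causal claim fixed, intervene on the hiker's
# position, and re-evaluate: "the boulder would have hit X."
print(boulder_hits(hiker_at_x=True))    # True
```

Notice that the counterfactual is just the conditional prediction evaluated at an unrealized input, with the causal rule held fixed.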

I think this demonstrates a circular, i.e. tautological, relationship between some types of causal claims and counterfactual claims.

But wait a second! I said above, the “prediction, right or wrong, contains a causal claim.” Could the prediction have been right? If the prediction were right, something would have been different. But what would have been different? A dizzying number of things could have been different in the causal history of the universe which would have resulted in the hiker being hit by a rock. But that’s only if some causal relationships could have been different.

And so, notice what I smuggled in: the idea of the hiker not moving. But my counterfactual could have been based upon any number of counterfactual claims of different kinds. “If the hiker did not see it” places the counterfactual on the hiker’s perceptual functions and agency. “If the boulder did not hit that ridge” places the counterfactual on the physical structure of the mountain. “If the hiker was not trained in Brazilian Jiu-Jitsu” places the counterfactual on some historical event in the hiker’s past. But there is no principled limit.

Like Clockwork… Metaphysics without Chaos

Let’s stick with a simple metaphysics. A standard 19th-century one holds that everything has a cause, and that probabilities lie in our observation of phenomena, not in the phenomena themselves. Therefore, if we could account for all phenomena at time t, then we could account for all phenomena at time t – 1 and t + 1.
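A minimal sketch of this clockwork picture, assuming an invertible toy update rule of my own invention: in a deterministic, invertible dynamics, the state at time t fixes the states at t – 1 and t + 1 exactly.

```python
# Toy deterministic dynamics. The update rule is an arbitrary
# illustrative assumption; all that matters is that it is invertible.

def forward(state: float) -> float:
    """One deterministic time step."""
    return 2.0 * state + 1.0

def backward(state: float) -> float:
    """The exact inverse: recovers the previous state."""
    return (state - 1.0) / 2.0

x_t = 5.0
x_next = forward(x_t)      # accounts for t + 1
x_prev = backward(x_t)     # accounts for t - 1
assert backward(x_next) == x_t and forward(x_prev) == x_t
```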

Even if we take the 19th-century physicalist metaphysics as strictly true, counterfactuals still have no independent existence outside of the causal account. The counterfactual claim is useful because it allows us to bracket off a piece of the universe. The causal claim and the counterfactual claim are not made simultaneously.

But notice we are talking about causal and counterfactual claims in language.

We can go deeper into this line of inquiry and make a stronger claim about counterfactuals. Counterfactuals are more than the mere product of observations made in an attempt to formulate more accurate causal claims. The formulation of a causal claim implies a deeper type of counterfactual. Consider the following example, taken more or less from Kant.

I am sitting on the beach on a bright day and my exoskeleton warms up. This perception registers in my brain as a relationship between “the sun” and warming up. And as stated earlier, such a perception implies a counterfactual, in this case a testable one: that if I were to remove myself from the “sun,” I would cease warming up. But perhaps at some point I am able to make a causal claim: “The sun causes heat through its light.” An understanding of the term “sun” will then imply a counterfactual statement that is different in type from the first one. That statement is one of logical necessity: if it doesn’t emit light, it’s not the sun.
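One way to make the difference in type concrete. A minimal sketch, with both predicates my own illustrative assumptions rather than anything from Kant:

```python
# Two counterfactual types from the sun example. Definitions invented.

def i_warm_up(in_sunlight: bool) -> bool:
    """Empirical, testable claim: sunlight warms me."""
    return in_sunlight

# Testable counterfactual: "if I removed myself from the sun,
# I would cease warming up."
assert i_warm_up(False) is False

def can_be_the_sun(emits_light: bool) -> bool:
    """Definitional (necessary) condition: emitting light is part of
    what 'sun' means, so a non-emitter is ruled out a priori."""
    return emits_light

# Counterfactual of logical necessity: "if it doesn't emit light,
# it's not the sun." No experiment required.
assert can_be_the_sun(False) is False
```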

This formulation of counterfactual claims and causal claims as logically contemporaneous implies that understanding causality is equivalent to understanding counterfactuals.

So, what we have here is the idea that counterfactuals can be taken as a mode of conditional prediction, and also as a logically equivalent transformation of a causal claim.

I wanted to apply this reasoning to Newcomb’s Problem in two ways. In the first, I remove all agents to demonstrate counterfactuals and causality. In the second, I apply the logic to the boring old Newcomb’s Problem. This, I think, will demonstrate that counterfactual formation determines our understanding of Newcomb’s Problem.

Application to Newcomb’s Atmospheric Molecules:

A sexless oxygen atom might bond with either O2 or CO. If it bonds with O2, it gets a thousand dollars. If it bonds with CO, it gets a million dollars. If placed equidistant from the two compounds, it bonds with CO every time. It never bonds with both for the biggest possible reward. But oxygen doesn’t care about money; it just bonds with whatever allows it to fill its orbital shells most easily, which happens to be CO. There is no tricking the laws of chemistry into making the atom become CO3.

Either O + CO → CO2, or O + O2 → O3 (ozone)

This is our prediction about the two possible outcome states.

Our observation is that O + CO → CO2. Yay! A million dollars for the oxygen atom!
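The determinism here can be sketched directly; the energy numbers below are made up for illustration, and real bond energetics are far more involved:

```python
# The atom's "choice" is fixed by energetics; the reward never
# enters the rule. Energy values are invented for illustration.

options = [
    {"product": "CO2", "energy_cost": -1.0, "reward": 1_000_000},
    {"product": "O3",  "energy_cost": -0.5, "reward": 1_000},
]

def bond(options):
    """Bond with whatever fills the orbital shells most easily,
    i.e., the lowest-energy option."""
    return min(options, key=lambda o: o["energy_cost"])

print(bond(options)["product"])   # CO2, every time
```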

So then we can start making counterfactual claims, of which there is no limit. “It did not create O3 because reasons.” “It did not create CO3 because reasons.” This rather contrived and silly example is meant to point out that nothing within the notion of a counterfactual relies upon intelligence. Counterfactuals do not merely reflect causal claims; rather, they are an endless opportunity for new causal conjectures, most of which are nonsense. If a counterfactual is about the world, then it must be circular with reference to predictions and observations.

The general form of principal-agent games begins with a small causal claim about a causal chain and builds observations into predictions and counterfactuals for a more robust model of behavior.

Could your prediction have gone right? Yes, but only if the predictions and observations of the agents had been different. This is not about any decision algorithm per se, but the results of an algorithm must have the possibility of being different in order for the prediction to have gone right. In the case of molecules in a vacuum, there is no difference possible.

When we define counterfactuals with reference to causal claims, counterfactuals are circular: an account of counterfactuals requires using counterfactual reasoning, because all such accounts require counterfactual reasoning. In the same way, all accounts of causality make use of causality.

But what if counterfactuals are not about causal claims? Many of them don’t seem to be, do they? In fact, many counterfactual claims are made without a second thought as to whether the counterfactual is possible or whether the event in question was “overdetermined,” which brings us out beyond the original hypothesis to a new set of questions.

+++

Set 2: Counterfactuals as Claims About Unobserved Causal Chains

Counterfactuals require making a claim about causal chains. So a counterfactual is still a prediction, but a strong version of one: an algorithm-dependent prediction. This type of counterfactual is a claim about a causal chain that didn’t occur or, perhaps even, can’t be observed.

So, the argument for the truth of the unobserved causal chain must be based on an analogy to some observed causal chain. The analogy will only work if the observed causal chain is isomorphic to the unobserved causal chain. Therefore, counterfactuals are claims about the isomorphism of an unobserved causal chain to an observed one.
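A crude sketch of what checking such an isomorphism might look like; the two chains below are invented stand-ins, and real cases are never this tidy:

```python
# A counterfactual defended by mapping an observed causal chain onto
# an unobserved one, link by link. Both chains are invented examples.

observed   = [("spark", "fire"), ("fire", "smoke")]
unobserved = [("match", "flame"), ("flame", "soot")]

def isomorphic(chain_a, chain_b) -> bool:
    """Check for a consistent one-to-one relabeling between two chains."""
    if len(chain_a) != len(chain_b):
        return False
    fwd, rev = {}, {}
    for (a_cause, a_effect), (b_cause, b_effect) in zip(chain_a, chain_b):
        for a, b in ((a_cause, b_cause), (a_effect, b_effect)):
            if fwd.setdefault(a, b) != b or rev.setdefault(b, a) != a:
                return False
    return True

print(isomorphic(observed, unobserved))   # True: the analogy can run
```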

Consider the Viking explorations of the Americas. Could the Vikings have established a medieval empire in the Americas? In all historical what-if scenarios, the argument will rest on a presumed structural similarity between the causal forces in the scenario that did occur and the one that didn’t.

However, sometimes we can control the environment and test counterfactual claims. In those cases, we can check the isomorphism. When this occurs, a counterfactual is ultimately not a claim about what is possible, but about what happens. Only that which happens is possible; the rest is inferential conjecture.

General form: counterfactuals are claims about unobserved causal chains, meaning that they imply an entire network of causal-relation claims. However, since everything has a cause, a mistaken counterfactual is not ‘almost’ correct.

In this second account, I did not need to make a claim about unobserved causal chains in order to explain counterfactuals. That is because, in this case, a counterfactual is a special type of causal claim, and not a type of claim I need to make in order to explain what a counterfactual is.

The application is thus: could your prediction have gone right? No. The prediction was always going to be wrong, because the predictor’s implicit claim about causal chains was incorrect.

So here in this paper I have pointed out a distinction between two different types of counterfactuals. The first type is essentially an alternate form of a causal claim, and the second type is a set of claims about unobserved causal chains.

Here are two examples of each.

  1. If not for its heat, the sun would be cold.

  2. If not for Thomas Schelling, there would be no book The Strategy of Conflict (1960).

  3. If the Athenians had stuck with Pericles’ strategy, they would have won the Peloponnesian War.

  4. If the Federal Reserve doesn’t raise interest rates, there will be endemic inflation.

I intentionally chose statements which look very similar but in fact are quite different.

The first two statements are based on a causal observation of the terms’ uses.

The second two statements are based on a causal conjecture about unstated relationships between the terms, that is, an implicit understanding of a possible causal connection.

One could respond that the distinction is one of degree and not of kind, in two directions. The Humean objection is that the terms are conjoined by the mind, not by anything outside of it. But that would leave us in the odd situation of committing ourselves to the idea that no relationships are causal. I think most people here would reject that.

Alternatively, we could say that all causal claims are probabilistic, and that the more steps there are between the terms, the more difficult the causal chain becomes to prove. However, this conflates distinct types of causality. To soften the dilemma, I simply offer the very normal caution to pay attention to the class types one is dealing with.

Stars and heat are logically necessarily connected. Thomas Schelling and “author of The Strategy of Conflict” stand in an identity relationship.

A particular strategy and the outcome of a war is a claim about the relationship among a complex set of interacting factors. And the same goes for the Federal Reserve, inflation, and interest rates. (Complexity introduces dynamic and nonlinear relationships and stepwise functions, and thus the possibility of types of causality beyond a merely monist view of causes.)

Final Question and Aside for Future Consideration:

I haven’t had the time to flesh this one out. But let me give one motivating example and leave it at that. Within one domain of knowledge, Orion knows K and Perpetua knows L. Lorien knows K ∪ L, and nothing more. Since Orion doesn’t know L, how can Orion figure out what he needs to learn in order to know K ∪ L and nothing more? Perpetua can’t help him because she doesn’t know K. Only Lorien is capable of taking the Knowledge Space and constructing a Learning Space, one that goes from K to K ∪ L with no gaps and no excess. To teach Orion step y, Lorien will have to create a causal diagram that goes from the original state to the desired state. The causal diagram will be reverse-engineered back to Orion’s current knowledge state. Each step will require a correct counterfactual claim. But how does Lorien check that her claims about the learning space are correct without testing them on Orion? Is it possible? How close can she get?
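For concreteness, here is the set-theoretic skeleton of the problem. A minimal sketch, with K, L, and the item names as invented placeholders rather than a full knowledge-space formalism:

```python
# Lorien's problem in miniature. Item names are invented placeholders.

K = {"a", "b", "c"}        # what Orion knows
L = {"c", "d", "e"}        # what Perpetua knows
target = K | L             # what Lorien knows: K union L, nothing more

gap = target - K           # what Orion still needs: {"d", "e"}

# Lorien must order the gap into steps with no gaps and no excess.
# Each proposed step carries a counterfactual claim of the form
# "if Orion learns x next, he will then be able to learn y,"
# which remains untested until tried on Orion himself.
for step in sorted(gap):
    print("teach:", step)
```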

Conclusion

TL;DR: I have pointed to two types of counterfactual claims: ones that are based upon an explicit causal model, and ones based upon an implicit causal model. Counterfactuals based upon an explicit causal model are circular with respect to causality because they are merely alternate forms of a conditional prediction based on the model.

On the other hand, many counterfactual claims are based upon implicit causal models and serve as a guide for framing the problem. I have been calling them causal conjectures about unstated relationships. But one could also call them implicit causal claims, or claims of possible isomorphism, or “all else being equal” counterfactuals, or mutatis mutandis counterfactuals. The basic idea is that in these counterfactuals one is claiming that x could change while all else remains unchanged, creating an isomorphism between our world and the counterfactual world. These are the interesting type of counterfactuals, but they are also the ones in which mathematical chaos is more likely, and thus the counterfactual claim itself is more likely to be nonsense.