Logical Counterfactuals are low-res

Cross-posted from my blog. Related to several of my previous posts.

(Epistemic status: I have no idea why such an obvious observation is never even mentioned by decision theorists. Or maybe it is and I simply have not seen it.)

A logical counterfactual, as described by Nate Soares:

In a setting with deterministic algorithmic agents, how (in general) should we evaluate the expected value of the hypothetical possible worlds in which the agent’s decision algorithm takes on different outputs, given that all but one of those worlds is logically impossible?

So we know that something happened in the past, but want to consider something else having happened instead under the same circumstances. Like the lament, familiar to everyone, over a decision already made: “I knew better than to do X and should have done Y instead.” It feels like there was a choice, and that one could have made a different choice while being the same person under the same circumstances.

Of course, a “deterministic algorithmic agent” would make the same decisions in the same circumstances, so what one is really asking is: “what kind of [small] difference in the agent’s algorithm, and/or what kind of [minimally] different inputs into the algorithm’s decision-making process, would result in a different output?” Phrased this way, we are describing different “hypothetical possible worlds”. Some of these worlds correspond to different algorithms, and some to different inputs to the same algorithm, and in this sense they are just as “logically possible” as the world in which the agent we observed took the action we observed.
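To make the re-framing concrete, here is a minimal toy sketch in Python (the agent, its threshold parameter, and the example inputs are all invented for illustration, not anything from the decision-theory literature). A deterministic algorithm plus fixed inputs yields exactly one output; the “other” outputs only show up in nearby worlds where the inputs or the algorithm itself differ slightly:

```python
# Toy illustration (all names and numbers are hypothetical):
# a deterministic agent always produces the same output for the same inputs,
# so the "counterfactual" outputs live in worlds with different inputs
# or a slightly different algorithm, not in this world.

def agent(observation: int, threshold: int = 5) -> str:
    """Deterministic decision rule: turn right iff observation >= threshold."""
    return "right" if observation >= threshold else "left"

# Same algorithm, same input: the output is fixed, not a free variable.
assert agent(7) == agent(7) == "right"

# Logically possible "counterfactual" worlds differ in input or algorithm:
print(agent(3))               # different input              -> "left"
print(agent(7, threshold=9))  # slightly different algorithm -> "left"
```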

Why does it feel like there is both a logical possibility and an impossibility in “the agent’s decision algorithm takes on different outputs”? Because the world we see is in low resolution! Seen from high up, a road looks like a single lane:

But when you zoom in, you see something looking like this, with a maze of lanes, ramps and overpasses:

So how does one answer Nate’s question? (Which was not really a question, but maybe should have been.) Zoom in! See what the agent’s algorithm is like, what hardware it runs on, and what inputs, unseen from high up, can cause it to switch lanes and take a different turn. Then the worlds that look counterfactual from high up (e.g. a car could have turned left, even though it turned right) become merely physically unlikely when zoomed in (e.g. the car was in the right lane with no way to go but right, short of jumping a divider). Treating the apparent logical counterfactuals like any other physical uncertainty seems like a more promising approach than inventing some special treatment for what appears to be an impossible possible world.
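As a rough sketch of what “treating it like any other physical uncertainty” could look like (the worlds, probabilities, and payoffs below are made up purely for illustration): instead of asking what a fixed algorithm “could have” output, put ordinary probabilities over the fine-grained worlds you cannot distinguish from high up and take an expected value in the usual way:

```python
# Hedged sketch: "zooming in" replaces a logical counterfactual with ordinary
# uncertainty over fine-grained worlds (lane position, unseen inputs, hardware
# glitches). All probabilities and payoffs here are invented for illustration.

worlds = [
    # (probability, description, action taken in that world, payoff)
    (0.90, "car already in the right lane, divider present", "right", 1.0),
    (0.07, "left lane was still reachable a moment earlier", "left", 3.0),
    (0.03, "sensor glitch flips the turn decision", "left", 0.5),
]

expected_value = sum(p * payoff for p, _, _, payoff in worlds)
p_left = sum(p for p, _, action, _ in worlds if action == "left")

print(f"Expected value over zoomed-in worlds: {expected_value:.2f}")
print(f"P(turning left), once zoomed in:      {p_left:.2f}")
```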