Logical Counterfactuals are low-res


Cross-posted from my blog. Related to several of my previous posts.

(Epistemic status: I have no idea why such an obvious observation is never even mentioned by decision theorists. Or maybe it is and I have simply not seen it.)

A logical counterfactual, as described by Nate Soares:

In a setting with deterministic algorithmic agents, how (in general) should we evaluate the expected value of the hypothetical possible worlds in which the agent’s decision algorithm takes on different outputs, given that all but one of those worlds is logically impossible?

So we know that something happened in the past, but want to consider something else having happened instead under the same circumstances. Like the lament about a decision made, ubiquitous and familiar to everyone: “I knew better than to do X and should have done Y instead.” It feels like there was a choice, and one could have made a different choice even while being the same person under the same circumstances.

Of course, a “deterministic algorithmic agent” would make the same decisions in the same circumstances, so what one really asks is “what kind of a [small] difference in the agent’s algorithm, and/or what kind of [minimally] different inputs into the algorithm’s decision-making process would result in a different output?” When phrased this way, we are describing different “hypothetical possible worlds”. Some of these worlds correspond to different algorithms, and some to different inputs to the same algorithm, and in this way they are just as “logically possible” as the world in which we observed the agent take the action it took.
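To make the distinction concrete, here is a minimal sketch in Python (the agent, payoff names and numbers are my own illustrative assumptions, not anything from Soares’ setting): a deterministic agent is just a function from inputs to a decision, and the “counterfactual” worlds correspond either to slightly different inputs or to a slightly different algorithm.

```python
# A minimal sketch, assuming a toy payoff-maximizing agent (names and numbers
# are illustrative, not from the post or from Soares' write-up).

def agent(perceived_payoffs):
    """Deterministic decision algorithm: pick the option with the highest payoff."""
    return max(perceived_payoffs, key=perceived_payoffs.get)

inputs = {"X": 1.0, "Y": 0.9}

# The actual world: same algorithm, same inputs, hence the same output, always.
assert agent(inputs) == "X"

# A counterfactual world with [minimally] different inputs -- just as
# logically possible as the actual one:
assert agent({"X": 1.0, "Y": 1.1}) == "Y"

# A counterfactual world with a [small] difference in the algorithm itself:
def cautious_agent(perceived_payoffs):
    """Variant algorithm that penalizes option X before choosing."""
    adjusted = {k: v - 0.2 if k == "X" else v for k, v in perceived_payoffs.items()}
    return max(adjusted, key=adjusted.get)

assert cautious_agent(inputs) == "Y"
```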

Why does it feel like “the agent’s decision algorithm takes on different outputs” is simultaneously a logical possibility and an impossibility? Because the world we see is in low resolution! Like when you see a road from high up, it looks like a single lane:

But when you zoom in, you see something looking like this, with a maze of lanes, ramps and overpasses:

So how does one answer Nate’s question? (Which was not really a question, but maybe should have been.) Zoom in! See what the agent’s algorithm is like, what hardware it runs on, what unseen-from-high-up inputs can cause it to switch lanes and take a different turn. Then the worlds which look counterfactual from high up (e.g. a car could have turned left, even though it turned right) become physically unlikely when zoomed in (e.g. the car was in the right lane with no other way to go but right, short of jumping a divider). Treating the apparent logical counterfactuals as any other physical uncertainty seems like a more promising approach than inventing some special treatment for what appears to be an impossible possible world.
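As a rough illustration of that last point (again a hedged sketch; the noise model and all numbers are my own assumptions), “zooming in” can be modeled as ordinary uncertainty over the fine-grained inputs the agent received, so “the car could have turned left” becomes a small physical probability rather than a logical impossibility:

```python
# A hedged sketch, assuming the only "zoomed-in" detail we are uncertain about
# is small perceptual noise in the agent's inputs; the noise model and numbers
# are illustrative assumptions.

import random

def agent(perceived_payoffs):
    """Same deterministic algorithm: take the highest-payoff option."""
    return max(perceived_payoffs, key=perceived_payoffs.get)

random.seed(0)
samples = 10_000
turned_left = 0

for _ in range(samples):
    # Unseen low-level input detail: a little noise in what the agent perceived.
    noisy_inputs = {
        "left": 0.9 + random.gauss(0, 0.05),
        "right": 1.0 + random.gauss(0, 0.05),
    }
    if agent(noisy_inputs) == "left":
        turned_left += 1

# "The car could have turned left" is now an ordinary, small physical
# probability rather than a logically impossible possible world.
print(f"P(turn left) ~ {turned_left / samples:.3f}")
```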