Logical Counterfactuals are low-res

Cross-posted from my blog. Related to several of my previous posts.

(Epistemic status: I have no idea why such an obvious observation is never even mentioned by decision theorists. Or maybe it is, and I just have not seen it.)

A logical counterfactual, as described by Nate Soares:

In a setting with deterministic algorithmic agents, how (in general) should we evaluate the expected value of the hypothetical possible worlds in which the agent’s decision algorithm takes on different outputs, given that all but one of those worlds is logically impossible?

So we know that something happened in the past, but want to consider something else having happened instead under the same circumstances. Like the ubiquitous lament, familiar to everyone, over a decision made: “I knew better than to do X and should have done Y instead”. It feels like there was a choice, and one could have made a different choice even while being the same person under the same circumstances.

Of course, a “deterministic algorithmic agent” would make the same decisions in the same circumstances, so what one really asks is: “what kind of [small] difference in the agent’s algorithm, and/or what kind of [minimally] different inputs into the algorithm’s decision-making process, would result in a different output?” Phrased this way, we are describing different “hypothetical possible worlds”. Some of these worlds correspond to different algorithms, and some to different inputs to the same algorithm, and in that sense they are just as “logically possible” as the world in which the agent we observed took the action we observed.
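To make that reframing concrete, here is a minimal sketch in Python. Every name and number in it is purely illustrative and not from the post: a deterministic agent is just a function of its algorithm’s parameters and its inputs, and the “hypothetical possible worlds” are enumerated by varying one or the other, never by asking the same (algorithm, input) pair to produce a different output.

```python
from itertools import product

def agent(threshold: float, observation: float) -> str:
    """A toy deterministic decision algorithm."""
    return "turn_left" if observation < threshold else "turn_right"

# The world we actually observed: this exact algorithm, this exact input.
observed_action = agent(threshold=0.5, observation=0.7)  # -> "turn_right"

# Nearby hypothetical worlds: small tweaks to the algorithm (the threshold)
# or to its inputs (the observation). Each such world is logically consistent
# on its own; none of them is "the same world with a different output".
nearby_thresholds = [0.4, 0.5, 0.6, 0.8]
nearby_observations = [0.3, 0.5, 0.7, 0.9]

for thr, obs in product(nearby_thresholds, nearby_observations):
    action = agent(thr, obs)
    if action != observed_action:
        print(f"threshold={thr}, observation={obs} -> {action}")
```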

Why does it feel like “the agent’s decision algorithm takes on different outputs” is both logically possible and logically impossible? Because the world we see is in low resolution! Like when you see a road from high up, it looks like a single lane:

But when you zoom in, you see something like this, with a maze of lanes, ramps, and overpasses:

So how does one answer Nate’s question? (Which was not really a question, but maybe should have been.) Zoom in! Look at what the agent’s algorithm is like, what hardware it runs on, and what inputs, unseen from high up, can cause it to switch lanes and take a different turn. Then the worlds that look counterfactual from high up (e.g. the car could have turned left, even though it turned right) become physically unlikely when zoomed in (e.g. the car was in the right lane with no other way to go but right, short of jumping a divider). Treating the apparent logical counterfactuals like any other physical uncertainty seems like a more promising approach than inventing some special treatment for what appears to be an impossible possible world.
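A companion sketch of that last move, again with invented names and made-up numbers: instead of positing an impossible world in which the same agent state yields a different output, put an ordinary probability distribution over the zoomed-in details we did not observe, and compute expected values under that physical uncertainty.

```python
def drive(lane: str) -> str:
    """Deterministic once the zoomed-in state (the lane) is known."""
    return "turn_right" if lane == "right_lane" else "turn_left"

# Zoomed-in microstates compatible with the coarse, high-up observation,
# with made-up probabilities: the car was almost certainly boxed into the
# right lane, but not quite certainly.
microstate_probs = {"right_lane": 0.9, "left_lane": 0.1}

# Payoffs for the resulting actions (also made up).
payoff = {"turn_right": 1.0, "turn_left": 5.0}

# The "counterfactual" of turning left is not logically impossible here;
# it is just a physically unlikely microstate, handled by ordinary
# expected-value reasoning.
expected_value = sum(p * payoff[drive(lane)]
                     for lane, p in microstate_probs.items())
print(expected_value)  # 0.9 * 1.0 + 0.1 * 5.0 = 1.4
```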