When I write the program, “If a sensor is activated, then energize a light”, I’m using a counterfactual to compactly specify a causal dependency that I want a software system to build into a hardware circuit. If the computation by which an agent influences a physical observable (corresponding to its decision) takes into account the environment’s dependence on the value of that observable, the agent can similarly model the environment’s dependence on its decision via counterfactuals of the form “if decision=d, then outcome=u”. It can also model how its decision depends on environmental consequences, via its decision computation, using a counterfactual: “if the computation which finds the decision value resulting in a world with maximal utility (through the environment’s dependence on my decision) = Make burritos, then decision=Make burritos”.
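To make that concrete, here’s a toy sketch (the environment model, utility function, and names are my own illustrative inventions, not anything from the post) of a decision computation that evaluates “if decision=d, then outcome=u” counterfactuals by simulating each candidate decision and taking the one with maximal simulated utility:

```python
# Sketch of a decision computation that evaluates counterfactuals
# "if decision=d, then outcome=u" by running a model of the environment
# on each candidate decision, then picking the decision whose simulated
# world has maximal utility. All specifics here are made up for illustration.

def environment(decision):
    # Toy model of the environment's dependence on the decision.
    return "fed customers" if decision == "Make burritos" else "hungry customers"

def utility(outcome):
    return 1.0 if outcome == "fed customers" else 0.0

def decide(options):
    # For each candidate d, evaluate the counterfactual
    # "if decision=d, then outcome=environment(d)" and score it.
    return max(options, key=lambda d: utility(environment(d)))

print(decide(["Make burritos", "Do nothing"]))  # -> "Make burritos"
```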
Material implication is usually read as “if x, then y”, which is superficially similar to “if sensor activated, then energize a light”, but as near as I can tell, it doesn’t capture counterfactual relations at all: “not-antecedent, or consequent” has practically nothing to do with the causal relations by which values are assigned to observables. But if you want to reason about consequential decisions, you need consequence-simulating machinery.
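A toy contrast (my own illustration, with an invented mechanism): material implication is satisfied by sheer coincidence in a single world, while the causal reading asks what would happen under an intervention, which requires an actual mechanism:

```python
# Material implication only checks truth values in one world.

def material_implication(sensor, light):
    # "if sensor then light" as not-antecedent-or-consequent.
    return (not sensor) or light

# A world where the sensor is off and the light happens to be on for some
# unrelated reason satisfies the implication vacuously...
print(material_implication(sensor=False, light=True))  # True

# ...but the causal question "would the light come on if we activated the
# sensor?" needs a mechanism connecting the two observables, e.g.:
def light_mechanism(sensor):
    return sensor  # the wiring the program was meant to install

print(light_mechanism(True))  # True because of the mechanism, not coincidence
```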
It looks like you’re trying to rebuild that consequential machinery in logic from scratch. I don’t understand provability logic, and can’t competently evaluate whether giving an agent procedures which work by never being evaluated, or the trick you’re using in this post, will allow material implication to behave usefully. I have heard, however, that the Curry-Howard isomorphism establishes strong correspondences between proofs and programs, so it seems to me that if evaluating counterfactuals is not a problem for decision theories that use Pearl’s causal calculus or the counterfactuals built into programming languages (or universal logic gates, or whatever), then it shouldn’t be a problem for logical decision theories: just directly represent Pearl’s already mathematically well-specified causal calculus (or maybe lambda calculus, Hoare logic, or circuits of controlled-NOT gates) in logic.
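For instance, here is a minimal structural-causal-model sketch of the kind of representation I have in mind (the class and variable names are my own invention, and this is only a toy rendering of Pearl’s do operator, not his full calculus):

```python
# Minimal structural causal model: each variable is defined by a function
# of the already-computed world, and a Pearl-style do() intervention severs
# a variable from its mechanism by fixing its value directly.

class SCM:
    def __init__(self, mechanisms):
        # mechanisms: dict mapping variable name -> function of the
        # partially built world (a dict of already-computed values).
        self.mechanisms = mechanisms

    def run(self, do=None):
        # Evaluate variables in insertion order, replacing the mechanism
        # of any intervened variable with the intervened constant.
        do = do or {}
        world = {}
        for var, f in self.mechanisms.items():
            world[var] = do[var] if var in do else f(world)
        return world

model = SCM({
    "decision": lambda w: "Do nothing",  # default policy
    "outcome":  lambda w: 1.0 if w["decision"] == "Make burritos" else 0.0,
})

# Counterfactual query: what is the outcome under do(decision="Make burritos")?
print(model.run(do={"decision": "Make burritos"})["outcome"])  # 1.0
print(model.run()["outcome"])                                  # 0.0
```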
Is that possible? Sensible? More difficult? Something?