Yeah, causation in logical uncertainty land would be nice. It wouldn’t necessarily solve the whole problem, though. Consider the scenario:
Hi, Med, Low = "Hi", "Med", "Low"  # strategy labels
outcomes = [3, 2, 1, None]
strategies = {Hi, Med, Low}
A = lambda: Low  # the agent's strategy
h = lambda: Hi
m = lambda: Med
l = lambda: Low
payoffs = {}
payoffs[h()] = 3
payoffs[m()] = 2
payoffs[l()] = 1
E = lambda: payoffs.get(A())  # the environment's payout; None if A() has no entry
Now it’s pretty unclear whether (lambda: Low)()==Hi should logically cause E()==3.
When considering (lambda: Low)()==Hi, do we want to change l without A, A without l, or both? These correspond to the answers None, 3, and 1 respectively: changing l alone overwrites payoffs[Hi] with 1 while A() still returns Low, which now has no entry; changing A alone leaves the payoff table intact; changing both sends A() to the overwritten entry.
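Concretely, the three counterfactuals can be checked by re-running the scenario with the relevant lambdas edited (a sketch; the run helper is hypothetical, not part of the original setup):

```python
def run(change_A, change_l):
    """Rebuild the scenario, optionally replacing A and/or l with lambda: Hi."""
    Hi, Med, Low = "Hi", "Med", "Low"
    A = (lambda: Hi) if change_A else (lambda: Low)
    h = lambda: Hi
    m = lambda: Med
    l = (lambda: Hi) if change_l else (lambda: Low)
    payoffs = {}
    payoffs[h()] = 3
    payoffs[m()] = 2
    payoffs[l()] = 1  # overwrites payoffs[Hi] when l is changed
    return payoffs.get(A())

print(run(change_A=False, change_l=True))   # change l without A -> None
print(run(change_A=True,  change_l=False))  # change A without l -> 3
print(run(change_A=True,  change_l=True))   # change both -> 1
```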
Ideally, a causal-logic graph would be able to identify all three answers, depending on which question you’re asking. (This actually gives an interesting perspective on whether or not CDT should cooperate with itself on a one-shot PD: it depends; do you think you “could” change one but not the other? The answer depends on the “could.”) I don’t think there’s an objective sense in which any one of these is “correct,” though.