It seems like what is needed is a logical uncertainty model that allows things that PA would consider contradictions (so we can reason about finite agents outputting something they don’t actually output), and that is causal. Some of the models in Paul’s Non-Omniscience, Probabilistic Inference, and Metamathematics paper allow contradictions in a satisfying way, but don’t contain any causation.
It seems that we want “this agent outputs this” to cause “consequences of this decision happen”. Suppose we create a Boolean variable True(A) for every logical proposition A, and we want to arrange these variables in something like a causal Bayesian network. Since the consequences of an action are a logical consequence of the agent’s output that can be seen within a small number of steps, perhaps we want to arrange the network so that the premises of an inference rule being true will cause their conclusion to be true (with high probability; we still want some probability of contradiction). But if we naively create causal arrows from the premises of an inference rule (such as modus ponens) to its conclusion (for example, allow True(A) and True(A→B) to jointly cause True(B)) then we get cycles. I’m not sure if causation is well-defined in cyclic graphs, but if it’s not, then maybe there is a way to fix this by deleting some of the causal arrows?
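The cycle problem can be made concrete with a toy sketch. This is only an illustration under assumed propositions (the names `A`, `B`, and the helper functions `mp_edges` and `has_cycle` are mine, not from the discussion): each modus ponens instance A, A→B ⊢ B contributes arrows from True(A) and True(A→B) to True(B), and as soon as implications run in both directions, the naive graph has a cycle.

```python
def mp_edges(implications):
    """Causal arrows from each modus ponens instance: for every assumed
    implication (a, b), both True(a) and True(a->b) point at True(b)."""
    edges = set()
    for a, b in implications:
        edges.add((a, b))            # premise True(A) -> conclusion True(B)
        edges.add((f"{a}->{b}", b))  # premise True(A->B) -> conclusion True(B)
    return edges

def has_cycle(edges):
    """Depth-first search for a directed cycle (0 = visiting, 1 = done)."""
    nodes = {n for e in edges for n in e}
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
    state = {}
    def dfs(u):
        state[u] = 0
        for v in adj[u]:
            if state.get(v) == 0 or (v not in state and dfs(v)):
                return True
        state[u] = 1
        return False
    return any(dfs(n) for n in nodes if n not in state)

# One-directional implication: still a DAG, so a causal network is fine.
print(has_cycle(mp_edges([("A", "B")])))              # False
# A <-> B: arrows A -> B and B -> A, and the network construction breaks.
print(has_cycle(mp_edges([("A", "B"), ("B", "A")])))  # True
```

Deleting arrows to break such cycles is exactly the unresolved choice the paragraph above raises: which arrow to delete is not determined by the logic alone.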
Yeah, causation in logical uncertainty land would be nice. It wouldn’t necessarily solve the whole problem, though. Consider the scenario
Hi, Med, Low = "Hi", "Med", "Low"  # strategy labels, so the snippet runs
outcomes = [3, 2, 1, None]
strategies = {Hi, Med, Low}
A = lambda: Low                    # the agent's actual output
h = lambda: Hi
m = lambda: Med
l = lambda: Low
payoffs = {}
payoffs[h()] = 3
payoffs[m()] = 2
payoffs[l()] = 1
E = lambda: payoffs.get(A())       # E() == 1, since A() == Low
Now it’s pretty unclear that (lambda: Low)()==Hi should logically cause E()==3.
When considering (lambda: Low)()==Hi, do we want to change l without A, A without l, or both? These correspond to answers None, 3, and 1 respectively.
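The three readings can be checked concretely by performing each counterfactual surgery by hand. This is a sketch assuming the strategy labels are plain strings (an implementation choice not fixed by the scenario); each run rebuilds the scenario with a different subset of the Low-returning lambdas replaced by (lambda: Hi).

```python
Hi, Med, Low = "Hi", "Med", "Low"  # assumed concrete strategy labels

def run(change_l, change_A):
    """Rebuild the scenario with l and/or A surgically set to return Hi."""
    A = (lambda: Hi) if change_A else (lambda: Low)
    h = lambda: Hi
    m = lambda: Med
    l = (lambda: Hi) if change_l else (lambda: Low)
    payoffs = {}
    payoffs[h()] = 3
    payoffs[m()] = 2
    payoffs[l()] = 1  # if l now returns Hi, this overwrites payoffs[Hi]
    E = lambda: payoffs.get(A())
    return E()

print(run(change_l=True,  change_A=False))  # None: payoffs has no Low entry
print(run(change_l=False, change_A=True))   # 3: A() == Hi, payoffs[Hi] == 3
print(run(change_l=True,  change_A=True))   # 1: payoffs[Hi] overwritten to 1
```

The surgery on l alone clobbers the payoff table rather than rerouting the agent, which is why it yields None instead of 3.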
Ideally, a causal-logic graph would be able to identify all three answers, depending on which question you’re asking. (This actually gives an interesting perspective on whether or not CDT should cooperate with itself on a one-shot PD: it depends; do you think you “could” change one but not the other? The answer depends on the “could.”) I don’t think there’s an objective sense in which any one of these is “correct,” though.