This usually works fine, but there are some cases where this fails to correctly compute the outcomes (namely, cases where others are reasoning about the contents of A, and their internal representations of A were not affected by your do(A=a)).
I still think this should be solved by the physics module.
For example, consider two cases. In case A, Ekman reads everything you’ve ever written on decision theory before September 26th, 2014, then fills the boxes as if he were Omega, and then you choose whether to one-box or two-box. Ekman’s a good psychologist, but his model of your mind is translucent to you at best: you think it’s more likely than not that he’ll guess correctly what you’ll pick, but you know his guess is mediated entirely by what you’ve already written, which you can no longer change.
In case B, Ekman watches your face as you choose whether to press the one-box button or the two-box button, without being able to see the buttons (or your finger), and then predicts your choice. Again, his model of your mind is translucent to you at best; he’ll probably guess correctly, but you don’t know what specifically he’s basing his prediction on (and suppose that even if you did, you know you don’t have enough control over your facial features to keep information from leaking).
It seems to me that the two cases deserve different responses: in case A, you don’t think your current thoughts will impact Ekman’s move, but in case B, you do. In a normal token trade, you don’t think your current thoughts will impact your partner’s move, but in a mirror token trade, you do. Those differences in belief reflect genuine differences in the perceived causal features of the situation, which seems sensible to me.
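To make the contrast concrete, here is a minimal sketch of the two cases as do(.)-style calculations; this is my own construction, and the accuracy, prior, and payoff numbers are illustrative assumptions rather than anything from the post. In case A the prediction node sits upstream of the intervention, so do(choice) leaves it untouched; in case B it sits downstream of the choice (via your face), so the intervention shifts it.

```python
# Illustrative sketch: Newcomb-with-Ekman under the two causal structures.
# All numbers below are assumptions for the sake of the example.

ACCURACY = 0.9        # case B: chance Ekman's guess tracks your actual choice
PRIOR_ONE_BOX = 0.5   # case A: prior that he already predicted "one-box"
BIG, SMALL = 1_000_000, 1_000

def payoff(choice, prediction):
    box_b = BIG if prediction == "one-box" else 0
    return box_b if choice == "one-box" else box_b + SMALL

def case_a_value(choice):
    # Case A: the prediction was fixed by your past writings, so do(choice)
    # leaves its distribution alone; two-boxing dominates by SMALL.
    return (PRIOR_ONE_BOX * payoff(choice, "one-box")
            + (1 - PRIOR_ONE_BOX) * payoff(choice, "two-box"))

def case_b_value(choice):
    # Case B: the prediction sits downstream of your choice (via your face),
    # so do(choice) moves it; one-boxing wins.
    other = "two-box" if choice == "one-box" else "one-box"
    return ACCURACY * payoff(choice, choice) + (1 - ACCURACY) * payoff(choice, other)

for label, value in [("case A", case_a_value), ("case B", case_b_value)]:
    print(label, {c: value(c) for c in ("one-box", "two-box")})
```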
That is, I think this is a failure of the process you’re using to build causal maps, not of the way you’re navigating those maps once they’re built. I keep coming back to the criterion “does a missing arrow imply independence?” because that’s the primary criterion for building useful causal maps, and if you have ‘logical nodes’ like “the decision made by an agent with template X,” then it doesn’t make sense to have a copy of that logical node elsewhere that’s allowed to take a distinct value.
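As a toy illustration of that criterion (again my own construction, with an assumed 50/50 prior over templates): if both decisions are read off the same template, the two decision nodes are perfectly correlated, so a graph that leaves them d-separated asserts an independence the world doesn’t have.

```python
from collections import Counter

# Assumed prior over the shared template; both agents instantiate it.
joint = Counter()
for template, p in [("give", 0.5), ("keep", 0.5)]:
    my_decision = template
    their_decision = template
    joint[(my_decision, their_decision)] += p

# All mass lands on the diagonal, so the two decision nodes are perfectly
# dependent; a graph with no path between them misdescribes the situation.
print(dict(joint))   # {('give', 'give'): 0.5, ('keep', 'keep'): 0.5}
```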
That is, I agree that this question is important:
“What does it mean to consider that a deterministic algorithm returns something that it doesn’t return?”
But my answer to it is “don’t try to intervene at a node unless your causal model was built under the assumption you could intervene at that node.” The mirror token trade causal map you used in this post works if you intervene at ‘template,’ but I argue it doesn’t work if you intervene at ‘give?’ unless there’s an arrow that points from ‘give?’ to ‘their decision.’
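A rough sketch of that contrast, with made-up token values (a token worth 1 to its holder and 2 to the receiver; none of this is from the original post): intervening at ‘template’ moves both copies of the decision together and favors giving, while a do(give?) on the graph as drawn leaves ‘their decision’ at its prior and so always favors keeping.

```python
KEEP_VALUE, RECEIVE_VALUE = 1, 2   # assumed: a token is worth 1 kept, 2 received

def my_payoff(i_give, they_give):
    return (0 if i_give else KEEP_VALUE) + (RECEIVE_VALUE if they_give else 0)

def intervene_at_template(give):
    # do(template): both agents run the same template, so both decisions
    # move together, and giving comes out ahead (2 vs. 1).
    return my_payoff(give, give)

def intervene_at_give(give, p_they_give=0.5):
    # do(give?) with no arrow from 'give?' to 'their decision': the partner's
    # move keeps its prior distribution, so keeping always looks better.
    return (p_they_give * my_payoff(give, True)
            + (1 - p_they_give) * my_payoff(give, False))

print("intervene at template:", {g: intervene_at_template(g) for g in (True, False)})
print("intervene at give?:   ", {g: intervene_at_give(g) for g in (True, False)})
```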
It sounds like we probably have similar intuitions about decision theory, but perhaps different ideas about what the do(.) function is capable of?
I think I see the do(.) operator as less capable than you do: in cases where the physicality of our computation matters, we need arrows pointing out of the node where we intervene that we don’t need when we can ignore the impact of having to physically perform computations in reality. Furthermore, it seems to me that once we’re at the level where how we physically process possibilities matters, ‘decision theory’ may not be a useful concept anymore.
Cool, it sounds like we mostly agree. For instance, I agree that once you set up the graph correctly, you can intervene do(.) style and get the Right Answer. The general thrust of these posts is that “setting up the graph correctly” involves drawing in lines / representing world-structure that is generally considered (by many) to be “non-causal”.
Figuring out what graph to draw is indeed the hard part of the problem—my point is merely that “graphs that represent the causal structure of the universe and only the causal structure of the universe” are not the right sort of graphs to draw, in the same way that a propensity theory of probability that only allows information to propagate causally is not a good way to reason about probabilities.
Figuring out what sort of graphs we do want to intervene on requires stepping beyond a purely causal decision theory.