I mean what I’m saying, by the way. As long as you’re OK with assuming you can have a logical prior in the first place, I don’t see any issue with representing LCM with the diagram I made.
Yes, it means the LCM is no different from the CM, but I don’t see an issue with that. Apart from involving different kinds of priors (logical vs. non-logical), the two problems are indeed identical.
If I’m missing something, please tell me what it is; I’d like to know!
I agree that your diagram gives the right answer to logical Counterfactual Mugging. The problem is that it’s not formal enough, because you don’t really explain what a “logical prior” is. For example, if we have logical Counterfactual Mugging based on a digit of pi, then one of the two possible worlds is logically inconsistent. How do we know that calculating the digit of pi by a different method will give the same result in that world, rather than blow up the calculator or something? And once you give a precise definition of “logical prior”, the problems begin to look more like programs or logical formulas than causal diagrams.
That’s fair enough; the “logical prior” is admittedly a big assumption, although it’s hard to justify anything other than a 50/50 split between the two possibilities.
However, LCM only takes place in one of the two possible worlds (the real one); the other never happens. Either way you’re calculating the digit of pi in this world; it’s just that in one of the two possibilities (which, as far as you know, are equally likely) you are the subject of logical counterfactual surgery by Omega. Assuming this is the case, surely calculating the digit of pi isn’t going to help?
From the point of view of your decision-making algorithm, which doesn’t know which of the two inputs it’s actually receiving (counterfactual vs. real), the right output is “GIVE”. Moreover, it chooses “GIVE” not merely for counterfactual utilons, but for real ones.
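To make that concrete, here’s a toy sketch of the reasoning. The stakes ($100 paid vs. $10,000 counterfactually received) are the usual Counterfactual Mugging numbers, assumed here rather than taken from the discussion above:

```python
def decide() -> str:
    """Toy sketch of the decision algorithm's reasoning.

    The algorithm cannot tell whether it is being run in the real
    world or inside Omega's counterfactual, so it evaluates each
    policy over both equally-weighted possibilities (the 50/50
    logical prior), using the usual (assumed) stakes: pay $100 if
    asked, receive $10,000 in the other branch if you would pay.
    """
    p = 0.5  # logical prior over the two coin outcomes
    expected_if_give = p * 10_000 + p * (-100)  # win branch + pay branch
    expected_if_refuse = p * 0 + p * 0          # nothing happens either way
    return "GIVE" if expected_if_give > expected_if_refuse else "REFUSE"
```

The same output is produced whichever input the algorithm actually receives, which is exactly the point: the policy, not the branch, is what gets rewarded.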
Of course, we’ve assumed that the logical counterfactual surgery Omega does is a coherent concept to begin with. The whole point of Omega is that Omega gets the benefit of the doubt, but in this situation it’s definitely still worthwhile to ask whether it makes sense.
In particular, maybe it’s possible to make a logical counterfactual surgery detector that is robust even against Omega. If you can do that, then you win regardless of which way the logical coin came up. I don’t think trying to calculate the relevant digit of pi is good enough, though.
Here’s an idea for a “logical counterfactual surgery detector”: Run a sandboxed version of your proof engine that attempts to maximally entangle that digit of pi with other logical facts. For example, it might prove that “if the 10000th decimal digit of pi is 8, then ⊥”.
If you detect that the sandboxed proof engine undergoes a logical explosion, then GIVE. Otherwise, REFUSE.
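Here is one way that might be made concrete. This is only a toy sketch: the “sandboxed proof engine” is stood in for by an independent Machin-formula computation of the digit, and the digit index is a parameter rather than hard-coded to the 10000th digit:

```python
def arctan_inv(x: int, scale: int) -> int:
    """Integer approximation of arctan(1/x) * scale via the Taylor series."""
    total = term = scale // x
    divisor = 1
    while term:
        term //= x * x
        divisor += 2
        # Alternate signs: +1/x - 1/(3x^3) + 1/(5x^5) - ...
        total += term // divisor if divisor % 4 == 1 else -(term // divisor)
    return total


def pi_digit(n: int) -> int:
    """n-th decimal digit of pi (n=1 is the first digit after the point),
    via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    scale = 10 ** (n + 10)  # ten guard digits absorb truncation error
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return (pi_scaled // 10 ** 10) % 10


def surgery_detector(claimed_digit: int, n: int = 10000) -> str:
    """Toy 'logical counterfactual surgery detector'.

    Sandbox: assume "the n-th digit of pi is claimed_digit" and try to
    entangle that premise with an independently derived fact. If the
    two clash, the premise proves a contradiction -- a stand-in for
    the proof engine's logical explosion -- so we are presumably in
    the inconsistent (counterfactual) world: GIVE. Otherwise REFUSE.
    """
    independently_computed = pi_digit(n)
    explosion = independently_computed != claimed_digit
    return "GIVE" if explosion else "REFUSE"
```

Of course, “entanglement” here collapses to recomputing the digit by a different method, which is exactly the move doubted above as not good enough against Omega; a serious detector would presumably need to entangle the digit with many more logical facts than one alternative computation.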