This is a rule that tells us not to draw the diagram with a physical node determined directly by the mathematical fact D xor E, but rather to have a physical node determined by D, and then a physical descendant determined by D xor E.
When I evaluate this proposed solution for ad-hoc-ness, it does admittedly look a bit ad hoc, but it solves at least one other problem besides the one I started with, one I didn't think of until now. Suppose Omega tells me that I make the same decision in the Prisoner's Dilemma as Agent X. This does not necessarily imply that I should cooperate with Agent X. X and I could have made the same decision for different (uncorrelated) reasons, and Omega could simply have found out, by simulating the two of us, that X and I gave the same response, presumably both defecting. If I cooperated instead, X wouldn't do anything differently; X is just a piece of paper with "Defect" written on it.
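The distinction at work here can be sketched as a toy computation (the payoff values and both agent functions are my own illustrative assumptions, not anything from the post):

```python
# Toy Prisoner's Dilemma: our move cannot "control" an agent whose
# output does not depend on ours. Standard payoff ordering assumed
# (temptation 5 > reward 3 > punishment 1 > sucker 0).

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def paper_agent(my_move):
    """A 'piece of paper' with Defect written on it: ignores our move."""
    return "D"

def mirror_agent(my_move):
    """A copy of us: its output is whatever ours is."""
    return my_move

def best_response(opponent):
    # Pick the move maximizing my payoff, given how the opponent's
    # move (possibly) depends on mine.
    return max("CD", key=lambda m: PAYOFF[(m, opponent(m))])

print(best_response(paper_agent))   # defecting: cooperating gains us nothing
print(best_response(mirror_agent))  # cooperating: C-C beats D-D
```

Against the paper, switching to cooperation only lowers our own payoff; against the mirror, our decision and X's output move together, which is the only case where "controlling" X makes sense.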
If X isn't like us, we can't "control" X by making a decision similar to what we would want X to output*. We shouldn't go from being an agent that defects in the Prisoner's Dilemma with Agent X when told we "make the same decision in the Prisoner's Dilemma as Agent X" to being one that cooperates, just as, in contract bridge, we do not unilaterally switch from natural to precision bidding when a partner opens with two clubs (which signals a strong hand under precision bidding but not under natural bidding).
However, there do exist agents who should cooperate every time they hear they "make the same decision in the Prisoner's Dilemma as Agent X": those who have committed to cooperate in such cases. In some such cases, they are up against pieces of paper on which "cooperate" is written (too bad they didn't have a more discriminating algorithm, or a clearer Omega); in others, they are up against copies of themselves or other agents whose output depends on what Omega tells them. In any case, many agents should cooperate when they hear that.
Yes? No?
Why shouldn't one be such an agent? Do we know ahead of time that we are likely to be up against pieces of paper with "cooperate" on them, and that Omega would unhelpfully tell us we "make the same decision in the Prisoner's Dilemma as Agent X" in all such cases, though if we had a different strategy we could have gotten useful information and defected in that case?
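One way to make this question concrete is to compare the committed cooperator against a hypothetical more discriminating policy, as a function of how often the message-generating opponent is a mere piece of paper. The payoff values, the population parameter, and the comparison itself are my illustrative assumptions:

```python
# Conventional Prisoner's Dilemma payoffs (not from the original post):
# R = mutual cooperation, T = temptation (defect on a cooperator).
R, T = 3, 5

def committed_value(p_paper):
    """Payoff of an agent committed to cooperate on hearing the message.

    Every opponent that truthfully generates "same decision as you" for
    this agent also cooperates, so it earns R regardless of the mix.
    """
    return R

def discriminating_value(p_paper):
    """Payoff of a hypothetical agent that can tell papers from copies:
    defect against 'cooperate' papers (T), cooperate with copies (R).

    p_paper: assumed fraction of message-generating opponents that are
    mere "cooperate" papers; the rest are copies of us.
    """
    return p_paper * T + (1 - p_paper) * R

# The discriminating policy matches the commitment when papers are
# absent and beats it whenever they occur.
print(committed_value(0.5), discriminating_value(0.5))
```

So committing to cooperate on the message only costs something to the extent that "cooperate" papers are actually common among the opponents who trigger it, which is exactly what we would need to know ahead of time.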
*Other cases include us defecting in order to get X to cooperate, and others where X's play depends on ours in more complicated ways, but this is the natural case to consider when asking whether Agent X's action depends on ours: an Agent X that is not strategically incompetent, has a strategy at least as good as always defecting or always cooperating, and does not try to condition his cooperation on our defection or the like.