Do you believe that actors cannot protect themselves from blackmail with pre-commitments?
I don’t believe that. If I could prove that, I could also prove the opposite (i.e. replace ‘cannot’ with ‘can always’), because which actor a decision problem is ‘about’ is arbitrary. That arbitrariness means any solution to the abstract problem has to be symmetric. In example 1, an actor protects themselves from blackmail; we can also imagine an inverted example 1, where the more sophisticated conditioner instead represents the blackmailer.
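To make that relabeling point concrete, here is a toy sketch of my own (the payoffs, the action names, and the placeholder ‘solver’ are all made up for illustration): any procedure that looks only at the abstract payoff structure gives the mirror-image answer when we swap which player we call ‘the blackmailer’, so it can’t privilege one role by label.

```python
# Toy illustration (mine, not from the post): a solver that only sees the
# abstract payoff structure returns the mirror-image recommendation when
# the player labels are swapped. All numbers are made up.

def solve(payoffs):
    """Placeholder 'abstract solver': pick the joint action with the highest total payoff."""
    return max(payoffs, key=lambda joint: sum(payoffs[joint]))

# (blackmailer action, target action) -> (blackmailer payoff, target payoff)
payoffs = {
    ("threaten", "cave"):   (3, -2),
    ("threaten", "resist"): (-1, -5),
    ("refrain",  "cave"):   (0, 0),
    ("refrain",  "resist"): (0, 0),
}

# Relabel the players: now player 2 is the blackmailer and player 1 the target.
swapped = {(b, a): (v2, v1) for (a, b), (v1, v2) in payoffs.items()}

print(solve(payoffs))  # ('threaten', 'cave')
print(solve(swapped))  # ('cave', 'threaten') -- the same answer with the labels swapped
```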
I think that what happens when both agents are advanced enough to fully understand this kind of problem is most similar to example 5. But in reality they wouldn’t recursively simulate each other forever, because they’d consider that a waste of resources; they’d have to make some choice eventually. Before making that choice, they’d recognize that there is no asymmetric solution to the abstract problem. I don’t know what their choice would be.
I can give a guess, with much less confidence than what I wrote about the logic. Given they’re both maximally advanced, they’d know they’ll perform similar reasoning; it’s similar to the prisoner’s-dilemma-with-a-clone situation. They could converge to a compromise policy-about-blackmail-in-general, if any such compromise is available for their values in their universe. I’m finding it hard to predict what such a ‘compromise’ could be when they’re not on relatively equal footing, though, e.g. when one can blackmail the other and the other can’t do it back. When they are on equal footing, e.g. have equal incentive to blackmail each other, maybe they would do this: “give each other the things the other wants, in cases where this increases our average value” (which is like normal acausal trade).
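Here is a minimal sketch of that “increases our average value” rule, with toy numbers I made up (none of this comes from the abstract problem itself):

```python
# A minimal sketch (my own toy numbers) of the compromise rule
# "give each other what the other wants when it increases our average value".

# joint policy -> (agent A's value, agent B's value)
outcomes = {
    "no_trade":     (5.0, 5.0),  # each agent just pursues its own goals
    "trade_favors": (7.0, 6.0),  # each does cheap things the other values a lot
    "one_sided":    (9.0, 2.0),  # A gains a lot, B loses relative to no_trade
}

def beats_default_on_average(outcomes, default="no_trade"):
    """Joint policies whose average value across both agents exceeds the default's."""
    baseline = sum(outcomes[default]) / 2
    return [name for name, vals in outcomes.items()
            if name != default and sum(vals) / 2 > baseline]

print(beats_default_on_average(outcomes))  # ['trade_favors', 'one_sided']
```

Note that the one-sided outcome also passes the average-value test in this toy setup, which is one way of seeing why I find the unequal-footing case hard to predict.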
After thinking about it more (38 minutes more than when I first posted this comment; I’ve been heavily editing/expanding it), it does feel like a game of ‘mutually’ choosing where-they-end-up-in-the-logical-space, not one of ‘committing’. Of course, to the extent the decisions are symmetric, they could choose to lock in “I commit to not give in to blackmail, you commit to make and follow through on blackmail”; they just both wouldn’t want that.
I don’t quite know what else there is to do in that situation other than “symmetrically converge to the mid-point”, even though I dislike where that leads in “unequal” cases like the one I described two paragraphs up (the better-situated superintelligence makes half the blackmail, and the worse-situated superintelligence gives in every time). Logic doesn’t care what I dislike. If this is true, I’ll just have to hope the side of good wins situationally and can prevent this from manifesting in the cases it cares about.
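To show what that “mid-point” outcome looks like numerically, here is a toy parametrization of my own (all payoffs made up): p is how often the better-situated agent blackmails, q is how often the worse-situated agent gives in.

```python
# Toy parametrization (made-up payoffs) of the 'mid-point' outcome described above.
# p = probability the better-situated agent blackmails,
# q = probability the worse-situated agent gives in when blackmailed.

def expected_values(p, q, gain=4.0, cave_cost=3.0, threat_cost=10.0, blackmail_cost=1.0):
    """Expected value for (blackmailer, target) under mixed policies p and q."""
    blackmailer = p * (q * gain - (1 - q) * blackmail_cost)
    target = -p * (q * cave_cost + (1 - q) * threat_cost)
    return blackmailer, target

for label, (p, q) in {
    "target's preferred corner: no blackmail ever": (0.0, 0.0),
    "blackmailer's preferred corner: always blackmail, always cave": (1.0, 1.0),
    "mid-point: half the blackmail, cave every time": (0.5, 1.0),
}.items():
    print(label, expected_values(p, q))
```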
Disclaimer: the above is about two superintelligences in isolation, not humans.