Chris: Sorry Allan, that you won’t be able to reply. But you did raise the question before bowing out...
I didn’t bow out; I’d just made a lot of comments recently. :)
I don’t like the idea that we should cooperate if it cooperates. No, we should defect if it cooperates. There are benefits and no costs to defecting.
But what if there are reasons for the other to have habits that are formed by similar forces?
In light of what I just wrote, I don’t see that it matters; but anyway, I wouldn’t expect a paperclip maximizer to have habits so ingrained that it can’t ever drop them. Even if it routinely has to make real trade-offs, it’s presumably smart enough to see that—in a one-off interaction—there are no drawbacks to defecting.
Simpleton: No line of causality from one to the other is required.
Yeah, I get your argument now. I think you’re probably right, in that extreme case.