I agree with everything you say there. Is this intended as disagreement with a specific claim I made? I’m just a little confused what you’re trying to convey.
If you agree with everything he said, then you don’t think rational agents cooperate on this dilemma in any plausible real-world scenario, right? Even superintelligent agents aren’t going to have full and certain knowledge of each other.
No? As I explained in the post, cooperation doesn’t require certainty, just that the expected value of cooperation is higher than that of defection. With the standard payoffs, rational agents cooperate as long as they assign greater than 75% credence to the other player making the same decision as they do.
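As a sketch of the arithmetic behind that threshold: if you assign credence p to the other player mirroring your decision, cooperating beats defecting whenever p·R + (1−p)·S > p·P + (1−p)·T, which rearranges to p > (T−S) / ((T−S) + (R−P)). The specific payoff values below are illustrative, not necessarily the ones from the post; they are chosen so the threshold comes out to 75%.

```python
def coop_threshold(T, R, P, S):
    """Minimum credence p (that the other player mirrors your choice)
    at which cooperating beats defecting.

    Solves p*R + (1-p)*S > p*P + (1-p)*T for p, where
    T = temptation, R = reward, P = punishment, S = sucker's payoff
    (T > R > P > S in a prisoner's dilemma)."""
    return (T - S) / ((T - S) + (R - P))

# Hypothetical payoffs, labeled as an assumption: chosen so the
# threshold equals 3/4, matching the post's 75% figure.
print(coop_threshold(T=6, R=4, P=2, S=0))  # 0.75
```

With these numbers, defecting only wins in expectation once your credence in the other player mirroring you drops below 75%; different payoff matrices shift that threshold.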
Not really disagreeing with anything specific, just pointing out what I think is a common failure mode: people first learn of decision theories better than CDT, say “aha, now I’ll cooperate in the prisoner’s dilemma!”, and then get defected on. There’s still additional cognitive work required to actually implement a decision theory yourself, which is distinct from both understanding that theory and wanting to implement it. I’m not claiming you yourself don’t already understand all this, but I think it’s an important disclaimer in any piece meant to introduce decision theories to people unfamiliar with them.
Ah, I see. That’s what I was trying to get at with the probabilistic case of “you should still cooperate as long as there’s at least a 75% chance the other person reasons the same way you do”, and the real-world examples at the end, but I’ll try to make that more explicit. Becoming cooperate-bot is definitely not rational!