If you agree with everything he said, then you don’t think rational agents cooperate on this dilemma in any plausible real-world scenario, right? Even superintelligent agents aren’t going to have full and certain knowledge of each other.
No? Like I explained in the post, cooperation doesn’t require certainty, just that the expected value of cooperation is higher than that of defection. With the standard payoffs, rational agents cooperate as long as they assign greater than 75% credence to the other player making the same decision as they do.
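To spell out the arithmetic behind that threshold (using the common payoff matrix \((T, R, P, S) = (3, 2, 1, 0)\) as an illustration; substitute whatever numbers the post actually uses), let \(p\) be your credence that the other player makes the same choice you do:

$$
\mathbb{E}[\text{cooperate}] = pR + (1-p)S = 2p, \qquad \mathbb{E}[\text{defect}] = pP + (1-p)T = 3 - 2p.
$$

Cooperating has the higher expected value when \(2p > 3 - 2p\), i.e. \(p > 3/4\), which is where the 75% figure comes from.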