It is a Nash equilibrium for the infinitely repeated PD (take two tit-for-tat players: neither has any incentive to deviate from their strategy).
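A minimal numerical sketch of this claim, with assumed standard PD payoffs (T=5, R=3, P=1, S=0, with payoffs at round t weighted by delta**t): it pits tit-for-tat against itself and against an always-defect deviator, and shows the deviation only pays when the discount factor is small.

```python
# Sketch: discounted payoffs in the repeated PD, tit-for-tat vs. tit-for-tat
# compared against an always-defect deviation. Payoffs T=5, R=3, P=1, S=0 are
# assumed for illustration, not taken from the thread.
T, R, P, S = 5, 3, 1, 0

def payoff(me, opp):
    # Row player's stage-game payoff.
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(me, opp)]

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then copy the opponent's last move.
    return 'C' if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return 'D'

def discounted_payoff(strat_a, strat_b, delta, rounds=200):
    # Player A's discounted utility over a long truncated horizon.
    hist_a, hist_b, total = [], [], 0.0
    for t in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        total += delta**t * payoff(a, b)
        hist_a.append(a)
        hist_b.append(b)
    return total

for delta in (0.3, 0.9):
    coop = discounted_payoff(tit_for_tat, tit_for_tat, delta)
    dev = discounted_payoff(always_defect, tit_for_tat, delta)
    print(delta, coop > dev)  # 0.3 False, 0.9 True
```

With delta = 0.9 mutual cooperation dominates the deviation; with delta = 0.3 the one-shot temptation payoff outweighs the discounted future stream, which is exactly the caveat raised below.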
I’m not sure that’s completely right. Infinitely repeated games need a discount factor to keep utilities finite, and the result doesn’t seem to hold if the discount factor is low enough (i.e., if the players are sufficiently impatient).
I believe the same result holds for one-shot games where you have your opponent’s code.
Yeah, that’s actually another result of mine called freaky fairness ;-) It relies on quining cooperation described here. Maybe I’ll present it in the paper too. LW user benelliott has shown that it’s wrong for multiplayer games, but I believe it still holds for 2-player ones.
Pages 151-152 of Multiagent Systems (http://www.masfoundations.org/) have the proper formulation. But they don’t seem to mention the need for high discount factors...
Your linked result seems to talk about average utilities in the long run, which corresponds to a discount factor of 1. In the general case it seems to me that discount factors can change the outcome. For example, if the benefit of unilaterally defecting instead of cooperating on the first move outweighs the entire future revenue stream, then cooperating on the first move cannot be part of any Nash equilibrium. I found some results saying indefinite cooperation is sustainable only if the discount factor is above a certain threshold.
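The threshold can be worked out in closed form, a sketch under the same assumed payoffs (T=5, R=3, P=1, S=0; these values are not from the thread). Against tit-for-tat, the two natural deviations are defecting forever (one round of T, then P forever) and alternating defect/cooperate (T, S, T, S, ...); cooperation must beat both.

```python
# Closed-form discount-factor threshold for tit-for-tat vs. tit-for-tat to be
# a Nash equilibrium, with assumed payoffs T=5, R=3, P=1, S=0.
T, R, P, S = 5, 3, 1, 0

# Deviation 1: defect forever. T + delta*P/(1-delta) <= R/(1-delta)
#   rearranges to delta >= (T-R)/(T-P).
# Deviation 2: alternate D,C,D,C. (T + delta*S)/(1-delta**2) <= R/(1-delta)
#   rearranges to delta >= (T-R)/(R-S).
threshold = max((T - R) / (T - P), (T - R) / (R - S))
print(threshold)  # ~0.667 for these payoffs
```

So with these payoffs cooperation is sustainable exactly when delta >= 2/3, consistent with the "above a certain threshold" results.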
That sounds reasonable. If v is the expected discounted utility at minmax and w is the expected discounted utility under the cooperative strategy, then whenever the one-shot gain from defection is less than the discounted difference w-v, we’re fine.
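That condition can be checked numerically, again a sketch under assumed payoffs (T=5, R=3, P=1, S=0): defecting trades a one-shot gain g = T-R now for the discounted loss delta*(w-v) starting next round, where w is cooperation forever and v is the minmax (mutual defection) value.

```python
# Sketch of the w - v condition for the repeated PD, with assumed payoffs
# T=5, R=3, P=1, S=0. A defection gains g = T - R once, but forfeits the
# difference between cooperative and minmax continuation values from the
# next round on, i.e. delta*(w - v).
T, R, P, S = 5, 3, 1, 0

def sustainable(delta):
    w = R / (1 - delta)   # cooperate forever
    v = P / (1 - delta)   # minmaxed: mutual defection forever
    g = T - R             # one-shot gain from defecting
    return g <= delta * (w - v)

print(sustainable(0.3), sustainable(0.9))  # False True
```

Solving g <= delta*(w - v) for delta recovers delta >= (T-R)/(T-P), the same grim-trigger threshold as above, so the two formulations agree.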