I THINK rational agents will defect 100 times in a row, or 100 million times in a row for this specified problem. But I think this problem is impossible. In all cases there will be uncertainty about your opponent/partner—you won’t know its utility function perfectly, and you won’t know how perfectly it’s implemented. Heck, you don’t know your OWN utility function perfectly, and you know darn well you’re implemented somewhat accidentally. Also, there are few real cases where you know precisely when there will be no further games that can be affected by the current choice.
In cases of uncertainty on these topics, cooperation can be rational. Something on the order of tit-for-tat, with an additional chance of defecting or forgiving based on how likely the game is to end with this iteration, might be right.
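A minimal sketch of that kind of strategy, with illustrative (not standard) parameter names: `p_end` is the agent's current estimate that this round is the last, and `forgive` is a small probability of overlooking a defection, hedging against a noisily implemented opponent.

```python
import random

def tft_with_horizon(opp_last, p_end, forgive=0.1):
    """One move of a tit-for-tat variant ('C' = cooperate, 'D' = defect).

    Mirrors the opponent's last move, except:
    - defects with probability p_end, the estimated chance that no
      future rounds remain to be affected by this choice, and
    - forgives an observed defection with probability forgive, since
      the opponent may be imperfectly implemented.
    The mixing rule here is an assumption for illustration, not a
    canonical algorithm.
    """
    if random.random() < p_end:
        return 'D'          # likely no future games to protect
    if opp_last == 'D' and random.random() < forgive:
        return 'C'          # occasional forgiveness breaks defection spirals
    return opp_last or 'C'  # mirror the opponent; cooperate on move one
```

With `p_end = 0` this reduces to (mostly forgiving) tit-for-tat; as `p_end` rises toward 1, it shades into always-defect, matching the backward-induction answer for a known final round.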