Carl—good point.
I shouldn’t have conflated perfectly rational agents (if there are such things) with classical game-theorists. Presumably, a perfectly rational agent could make this move for precisely this reason.
Probably the best situation would be if we were so transparently naive that the maximizer could actually verify that we were playing naive tit-for-tat, including on the last round. That way, it would cooperate for the first 99 rounds and defect only on the last. But with it in another universe, I don’t see how it can verify anything of the sort.
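(To make the "99 rounds" claim concrete, here is a minimal sketch, assuming the standard payoff numbers T=5, R=3, P=1, S=0 and a 100-round game, neither of which is specified in the original post: backward induction for a payoff-maximizer facing a verified naive tit-for-tat opponent. The maximizer cooperates on every round but the last, for a total of 99·3 + 5 = 302.)

```python
# Sketch only: best response to a *known* naive tit-for-tat player over 100 rounds.
# Payoff values T, R, P, S and the round count are illustrative assumptions.
from functools import lru_cache

T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff
ROUNDS = 100

def payoff(me, opp):
    """My single-round payoff; 'C' = cooperate, 'D' = defect."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(me, opp)]

@lru_cache(maxsize=None)
def best(opp_next, rounds_left):
    """Maximizer's best achievable total when tit-for-tat is about to play opp_next."""
    if rounds_left == 0:
        return 0, ()
    options = []
    for me in ('C', 'D'):
        # Tit-for-tat copies my current move on the following round.
        future, plan = best(me, rounds_left - 1)
        options.append((payoff(me, opp_next) + future, (me,) + plan))
    return max(options)

total, plan = best('C', ROUNDS)       # tit-for-tat opens by cooperating
print(total)                          # 302 = 99*R + T
print(plan.count('C'), plan[-1])      # 99 cooperations, final move 'D'
```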
(By the way, Eliezer, how much communication is going on between us and Clippy? In the iterated dilemma’s purest form, the only communications are the moves themselves—is that what we are to assume here?)