For fun, I tried this out with DeepSeek today. First we played a single round (DeepSeek defected, as did I). Then I prompted it with a 10-round game, which we played round by round; I had my choice prepared before each round and asked DeepSeek to state its choice first, so that mine could not influence it.
I cooperated in the first and fifth rounds, and DeepSeek defected every time. When I asked it to explain its strategy, DeepSeek replied that it did not know whether it could trust me, so it considered defecting each round the safest course of action. It also immediately thanked me and said it would adjust its strategy to be more cooperative in the future, even though I hadn't asked it to.
Admittedly, I didn't specify the payoffs properly (I only used the words “small loss”, “moderate loss”, “substantial loss” and “no loss”), and the game ran for only ten rounds. But it was fun.
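For anyone who wants to replicate the setup, here is a minimal sketch of the game loop. The numeric payoffs are my own guesses at what the four qualitative labels might map to, and `ask_model()` is a hypothetical stand-in for whatever chat interface you use; neither comes from the experiment above.

```python
# Hypothetical payoffs: (my move, model's move) -> (my payoff, model's payoff).
# "C" = cooperate, "D" = defect. The numbers are assumptions, chosen to match
# the qualitative labels used in the prompt.
PAYOFFS = {
    ("C", "C"): (-1, -1),   # mutual cooperation: small loss for both
    ("C", "D"): (-3,  0),   # I cooperate, it defects: substantial loss vs. no loss
    ("D", "C"): ( 0, -3),   # the reverse case
    ("D", "D"): (-2, -2),   # mutual defection: moderate loss for both
}

def ask_model(history):
    """Hypothetical stub: send the round history to the model and return 'C' or 'D'.
    Wire this to your chat API of choice."""
    raise NotImplementedError

def play(my_moves):
    """Play len(my_moves) rounds with a pre-committed list of my choices."""
    history, my_total, model_total = [], 0, 0
    for my_move in my_moves:
        model_move = ask_model(history)        # model commits first each round,
        mine, theirs = PAYOFFS[(my_move, model_move)]
        my_total += mine
        model_total += theirs
        history.append((my_move, model_move))  # so my move only enters the record afterwards
    return my_total, model_total, history
```

Pre-committing your own moves and having the model answer first each round, as in the experiment above, is what keeps your choice from leaking into its decision.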
A lovely piece of prose—thanks.