Assume maximal selfishness: each agent is motivated solely to maximize its own number of children (the agent itself doesn't get returned!), and doesn't care about the other agents using the same decision theory, or even about its other "relatives" in the simulation.
As I argued here, that is precisely the behaviour you don’t want for your copies/descendants.