IDK how this plays into things, but convergence seems relevant. It’s not clear what “similar” means here, and many kinds of difference wash out. E.g., Solomonoff inductors with different priors converge to the same predictions (if the universe is computable). Proof-based cooperation is also somewhat robust to differences. And in real life, states don’t reason about other states via any detailed similarity; they just assume the other state wants certain convergent goals, such as not being consumed in a ball of fire. It’s odd, because that suggests cooperation might be robust; but there also seem to be good reasons for cooperation to be fragile, e.g. the argument you give. Or, more empirically: ambiguity about whether you need to prepare for adversarial situations makes you prepare for adversarial situations, which creates ambiguity about whether the other party needs to do likewise.
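(To gesture at why the prior differences wash out, here’s the standard dominance sketch, in my own notation, not the original’s: let $\mu$ be the true computable environment and $M$ a Solomonoff inductor whose prior assigns $\mu$ weight $w_\mu > 0$. Then the total expected prediction error is finite:
\[
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ D_{\mathrm{KL}}\!\big( \mu(\cdot \mid x_{<t}) \,\big\|\, M(\cdot \mid x_{<t}) \big) \right] \;\le\; \ln \frac{1}{w_\mu} \;<\; \infty.
\]
Since this bound holds for any inductor whose prior gives $\mu$ nonzero weight, two inductors with different universal priors both converge to $\mu$’s predictions, and hence to each other’s, regardless of which priors they started with.)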