IDK how this plays into things, but convergence seems relevant. It’s not clear what “similar” means here. Many kinds of difference wash out: e.g., Solomonoff inductors with different priors converge on the same predictions (if the universe is computable). Proof-based cooperation is also somewhat robust. Like, in real life, states don’t reason about other states via any sort of detailed similarity; rather, they assume the other state wants certain convergent goals, such as not being consumed in a ball of fire. It’s weird, because that suggests cooperation might be robust; but it also feels like there are good reasons for cooperation to be fragile. E.g. the argument you give. Or, more empirically: ambiguity about whether you need to prepare for adversarial situations makes you prepare for adversarial situations, which in turn creates ambiguity about whether the other side needs to do likewise.
People simulate other people in their minds. They don’t need to think “they are similar to me”. Simulating them in a way that is close enough to how they actually think may be enough.
Newcomb-like problems are common in real life. This seems to suggest that high-fidelity simulation is not needed.
I was surprised when I learned that Newcomb-like problems are common in real life.
It seems reasonable to compare a decision theory that tries to solve these to whatever intuitive means people use to solve them.