I am more talking about the broader phenomenon of “simulating other agents adversarially in order to circumvent their predictions”
The idea of “simulating adversarially” might be a bit confusing in the context of diagonalization, since it’s the diagonalization that is adversarial, not the simulation. In particular, you’d want mutual simulation (or, rather, more abstract reasoning) for coordination. If you merely succeed in acting contrary to a prediction, making the prediction wrong, that’s not diagonalization. What diagonalization does is prevent the prediction from being made in the first place (or, when the predictor is assigning a credence, keep that credence stuck at some weak prior). So diagonalization is something done against a predictor whose prediction is targeted, rather than something done by the predictor. A diagonalizer might itself want to be a predictor, but that is not necessary if the prediction is just given to it.
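To make the structure concrete, here is a minimal toy sketch (all names are illustrative, not from any particular formalism): an agent that is simply handed a prediction of its own action and acts contrary to it. Against such an agent, no concrete prediction can come true, so a predictor that sees this policy has no fixed point to commit to — which is the sense in which the prediction is prevented rather than merely falsified after the fact.

```python
# Toy diagonalizer: given a prediction of its own action, pick a different one.
# With at least two actions available, every concrete prediction is
# self-defeating by construction, so no fixed-point prediction exists.

ACTIONS = ["cooperate", "defect"]

def diagonalize(predicted_action: str) -> str:
    """Return any available action other than the predicted one."""
    for action in ACTIONS:
        if action != predicted_action:
            return action
    raise ValueError("need at least two actions to diagonalize")

def prediction_survives(predicted_action: str) -> bool:
    """Does the prediction still come true against the diagonalizer?"""
    return diagonalize(predicted_action) == predicted_action

# No candidate prediction survives: the predictor's only stable option is
# to decline to commit (or to stay at a weak prior over the actions).
assert not any(prediction_survives(a) for a in ACTIONS)
```

Note that the diagonalizer here does no simulation at all — the prediction is handed to it — which illustrates the last point above: the adversarial work is in the diagonalization, not in any simulation the agent performs.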