Important note about Aumann’s agreement theorem: both agents have to have the same priors. With human beings this isn’t always the case, especially when it comes to values. But even with perfect Bayesian reasoners it isn’t always the case, since their model of the world is their prior. Two Bayesians with the same data can disagree if they are reasoning from different causal models.
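To make this concrete, here is a minimal sketch (hypothetical numbers, plain discrete Bayes) of two agents updating on the same coin-flip data but starting from different priors over the coin's bias — their posteriors end up different:

```python
def update(prior, data):
    """Bayes update over discrete coin-bias hypotheses.

    prior: dict mapping bias -> prior probability
    data: sequence of 'H' / 'T' flips
    """
    unnorm = {}
    for bias, p in prior.items():
        like = 1.0
        for flip in data:
            like *= bias if flip == 'H' else (1 - bias)
        unnorm[bias] = p * like
    z = sum(unnorm.values())
    return {b: v / z for b, v in unnorm.items()}

data = ['H', 'H', 'T']

# Agent A: uniform prior over three candidate biases.
agent_a = update({0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}, data)
# Agent B: prior heavily favoring the low-bias hypothesis.
agent_b = update({0.3: 0.8, 0.5: 0.1, 0.7: 0.1}, data)

# Same data, different priors -> different posteriors on bias 0.7.
```

With only three flips the priors dominate, so the agents honestly disagree even though they saw identical evidence.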
Now with infinite data, abandonment of poorly performing models, and an Occam prior, it is much more likely that they will agree. But it's not mathematically guaranteed, AFAIK.
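Extending the same toy setup (hypothetical numbers again), the convergence point can be sketched too: as long as both priors give the true hypothesis nonzero weight, enough data swamps the prior and the posteriors merge. Log-space arithmetic avoids underflow on long data:

```python
import math
import random

def update(log_prior, data):
    """Bayes update over discrete coin-bias hypotheses, in log space."""
    scores = {}
    for bias, lp in log_prior.items():
        ll = sum(math.log(bias if f == 'H' else 1 - bias) for f in data)
        scores[bias] = lp + ll
    m = max(scores.values())
    exp = {b: math.exp(s - m) for b, s in scores.items()}
    z = sum(exp.values())
    return {b: v / z for b, v in exp.items()}

random.seed(0)
# 2000 flips from a coin whose true bias is 0.7.
data = ['H' if random.random() < 0.7 else 'T' for _ in range(2000)]

agent_a = update({0.3: math.log(1 / 3), 0.5: math.log(1 / 3),
                  0.7: math.log(1 / 3)}, data)
agent_b = update({0.3: math.log(0.8), 0.5: math.log(0.1),
                  0.7: math.log(0.1)}, data)

# Both posteriors now concentrate almost all mass on 0.7,
# so the agents agree despite their different priors.
```

Note the caveat this illustrates: convergence here relies on both agents assigning the true hypothesis positive prior probability; an agent whose model rules it out entirely never recovers.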
It’s a good heuristic in practice. But don’t draw strong conclusions from it without corroborating evidence.
The usual formalization of “Occam’s prior” is the Solomonoff prior, which still depends on the choice of a Universal Turing Machine, so such agents can still disagree because of different priors.
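A toy illustration of that machine-dependence (entirely hypothetical description lengths, not real Kolmogorov complexity): an Occam weight of 2^-length depends on which description language you measure length in, so two agents using different languages get different priors over the same hypotheses.

```python
# Hypothetical shortest-description lengths (in bits) for the same three
# models under two different universal description languages.
lengths_lang1 = {'fair_coin': 2, 'biased_coin': 5, 'markov_chain': 9}
lengths_lang2 = {'fair_coin': 6, 'biased_coin': 3, 'markov_chain': 8}

def occam_prior(lengths):
    """Normalize 2^-length weights into a prior over models."""
    w = {m: 2.0 ** -l for m, l in lengths.items()}
    z = sum(w.values())
    return {m: v / z for m, v in w.items()}

prior1 = occam_prior(lengths_lang1)  # favors fair_coin
prior2 = occam_prior(lengths_lang2)  # favors biased_coin
```

Both agents are being "Occamist", yet they start from different priors — which is exactly the loophole in Aumann-style agreement described above.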