Most investigations into this question end up proving something called Aumann’s Agreement Theorem, which roughly speaking states that if the agents correctly trust each other, then they will end up agreeing with each other. Perhaps there are types of knowledge differences that prevent this once one deviates from ideal Bayesianism, but if so, it is not known what they are.
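To make the theorem’s flavor concrete, here is a minimal sketch of the kind of dialogue process (due to Geanakoplos and Polemarchakis) that drives many agreement results: two ideal Bayesians share a common prior but hold different private information, repeatedly announce their posteriors for an event, and update on each other’s announcements until they agree. The state space, the event `A`, and the two partitions below are an invented toy setup, not taken from any particular paper.

```python
from fractions import Fraction

# Toy setup (hypothetical): 8 equally likely states, an event A,
# and two agents with different private information partitions.
STATES = range(8)
PRIOR = {w: Fraction(1, 8) for w in STATES}
A = {0, 1, 4, 6}  # the event both agents are estimating

# Each cell is a set of states the agent cannot distinguish between.
p1 = [frozenset({0, 1, 2, 3}), frozenset({4, 5, 6, 7})]
p2 = [frozenset({0, 2, 4, 6}), frozenset({1, 3, 5, 7})]

def cell(partition, w):
    """The information set of an agent at true state w."""
    return next(c for c in partition if w in c)

def posterior(info_set):
    """P(A | info_set) under the common prior."""
    num = sum(PRIOR[w] for w in info_set & A)
    den = sum(PRIOR[w] for w in info_set)
    return num / den

def refine(partition, announced, announcer_partition):
    """Split each cell by what the announcement reveals: states where
    the announcer would have said `announced` vs. states where not."""
    consistent = {w for w in STATES
                  if posterior(cell(announcer_partition, w)) == announced}
    new = []
    for c in partition:
        for piece in (c & consistent, c - consistent):
            if piece:
                new.append(piece)
    return new

true_state = 0
for round_no in range(1, 10):
    q1 = posterior(cell(p1, true_state))
    q2 = posterior(cell(p2, true_state))
    print(f"round {round_no}: agent 1 says {q1}, agent 2 says {q2}")
    if q1 == q2:
        break
    # Announcements are public; each agent updates on the other's,
    # interpreted relative to the other's current information.
    p1, p2 = refine(p1, q2, p2), refine(p2, q1, p1)
```

In this run the announcements go (1/2, 3/4), (1/2, 3/4), (1/2, 1/2): even hearing agent 1 repeat the same number is informative to agent 2, because the fact that agent 1’s posterior did not move after agent 2’s announcement rules out some states. That is the sense in which agents who correctly model each other cannot simply persist in disagreement.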