We usually think of Aumann updating as updating upward when the other person’s probability is higher than you expected, and downward when it is lower. But sometimes it works the other way around. Example: there are blue urns containing mostly blue balls with some red, and red urns containing mostly red balls with some blue. Except on Opposite Day, when the urn colors are reversed. Opposite Day is rare, and when it is OD you may or may not learn that it is. A and B are handed an urn and are trying to determine whether it is red. It is OD, which A knows but B doesn’t. Both draw a few balls. A knows that if B draws red balls, B (not knowing it’s OD) will report a high probability that the urn is red, and therefore A (knowing it’s OD) should assign a low probability to red, and vice versa. So this is a sense in which intelligence can be inverted misguidedness: A’s rational update is B’s misinformed update with the sign flipped.
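A minimal numerical sketch of the inversion, under assumed parameters not given in the original: red urns yield red balls with probability 0.8 and blue urns with probability 0.2 (reversed on OD), a uniform prior over urn colors, and an OD prior of 0.01 for B.

```python
# Opposite Day urn sketch. Assumed numbers (not in the original post):
# red urn -> P(red ball) = 0.8, blue urn -> P(red ball) = 0.2, reversed on OD;
# uniform 0.5/0.5 prior over urn colors; B assigns OD a prior of 0.01.

def lik(p_red_ball, n_red, n_blue):
    """Probability of a particular sequence of n_red red and n_blue blue draws."""
    return p_red_ball ** n_red * (1 - p_red_ball) ** n_blue

def posterior_red_urn(n_red, n_blue, p_od):
    """P(urn is red | draws) for an agent who assigns probability p_od to OD."""
    # On a normal day a red urn yields red balls w.p. 0.8; on OD, w.p. 0.2.
    like_red  = (1 - p_od) * lik(0.8, n_red, n_blue) + p_od * lik(0.2, n_red, n_blue)
    like_blue = (1 - p_od) * lik(0.2, n_red, n_blue) + p_od * lik(0.8, n_red, n_blue)
    return like_red / (like_red + like_blue)  # uniform prior over urn colors

# B draws 4 red, 1 blue and, giving OD only its tiny prior, concludes "red urn".
b_estimate = posterior_red_urn(4, 1, p_od=0.01)
# A conditions on the same draws but knows it is OD; the evidence now points to "blue urn".
a_estimate = posterior_red_urn(4, 1, p_od=1.0)
print(b_estimate, a_estimate)
```

With these assumed numbers B’s estimate comes out above 0.9 while A’s comes out below 0.1: the very draws that push B up push A down, which is the sense in which A updates in the opposite direction from B’s report.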
Another thought: suppose that in the example above there is also a small chance (say, equal to the chance that it’s OD) that A is insane and behaves as if he knows for certain that it is always OD. Now return to the case where it really is OD and A is in fact sane. From B’s perspective, a sane A on OD and an insane A behave identically, so B can never tell the two explanations apart, and the estimates of A and B will remain substantially different forever. Taken as an example, this suggests that even tiny failures of common knowledge of rationality can, in correspondingly improbable cases, cause large and persistent disagreements between rational agents.
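The persistence can be made concrete with Bayes’ rule. A hedged sketch, assuming (as the original does not fully specify) that P(OD) = P(A insane) = q, that the two are independent, and that an insane A acts as if it is certainly OD:

```python
# Insane-A variant. Assumptions (not fully specified in the original):
# P(OD) = P(A is insane) = q, independent; an insane A acts as if it is
# certainly OD; a sane A on OD acts the same way, so B cannot distinguish them.

def p_od_given_a_acts_as_if_od(q):
    """B's posterior that it really is OD, after observing A act as if it is OD."""
    # A acts this way iff (A is sane and it is OD) or (A is insane, any day).
    p_report        = (1 - q) * q + q       # sane & OD  +  insane
    p_report_and_od = (1 - q) * q + q * q   # sane & OD  +  insane & OD
    return p_report_and_od / p_report       # equals 1 / (2 - q)

print(p_od_given_a_acts_as_if_od(0.01))  # ~0.5
```

Because the two explanations have equal prior probability and produce identical behavior from A, B’s credence that it is really OD is capped near 1/2 no matter how long they exchange estimates, while the sane A remains certain of OD, so the gap between them never closes.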
Is the reasoning here correct? Are the examples important in practice?