I don’t know of any research that’s this direct. Well, that’s not quite true: https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem is pretty famous, so if the “Bayesian agent” and the “experts” have a common prior and common knowledge of each other’s posteriors (and rationality), the updates (in both directions) are pretty straightforward.
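For concreteness, here’s a hedged sketch of the theorem’s statement, paraphrased from the page linked above (the symbols are mine: $\mathcal{I}_1, \mathcal{I}_2$ are the two agents’ information partitions, $E$ the event in question):

```latex
% Two agents share a common prior P and hold private information
% partitions I_1 and I_2, giving posteriors for an event E:
\[
  q_i = P\!\left(E \mid \mathcal{I}_i\right), \qquad i \in \{1, 2\}
\]
% Aumann's result: if the pair (q_1, q_2) is common knowledge
% between the agents, the posteriors must coincide -- rational
% agents with a common prior cannot "agree to disagree".
\[
  (q_1, q_2)\ \text{common knowledge} \;\Longrightarrow\; q_1 = q_2
\]
```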
But when you say “research”, it seems like you’re talking about humans, and there isn’t a Bayesian agent among us. Neither the person in question nor the experts have any clue what their priors are or what evidence they’re updating on.
You can still use some amount of Bayes-inspired logic in your updates. “Update based on your level of surprise” is pretty solid in many cases. The main problem I see is selection bias: which experts are actually sharing such statements, and how do you weight your surprise at different “expert” pronouncements?
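A minimal sketch of what I mean, with entirely made-up numbers (the likelihoods are hypothetical, not from any study): the update you get from an expert pronouncement is carried by the likelihood ratio, and selection bias flattens that ratio toward 1.

```python
# "Update based on your level of surprise", as a plain Bayes update.
# The less likely the expert's claim was under your prior, the more
# the posterior moves. All probabilities here are illustrative.

def bayes_update(prior: float, p_claim_if_true: float,
                 p_claim_if_false: float) -> float:
    """Posterior P(H | expert endorses H) via Bayes' rule."""
    numerator = p_claim_if_true * prior
    evidence = numerator + p_claim_if_false * (1.0 - prior)
    return numerator / evidence

# Prior on hypothesis H is 0.2, and an expert endorses H.
# If experts endorse H 70% of the time when it's true but only 10%
# when it's false, the endorsement is surprising-if-false, so it
# moves you a lot:
print(bayes_update(0.2, 0.7, 0.1))   # ~0.64

# With a selection-biased expert pool (endorsers self-select almost
# regardless of truth), the likelihoods are nearly equal and the
# same pronouncement barely moves you:
print(bayes_update(0.2, 0.7, 0.6))   # ~0.23
```

The point of the second call: the question isn’t just “how surprised am I?” but “how surprised would I be in the world where the claim is false?”, and selection effects are exactly what make those two numbers hard to tell apart.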