I guess I should add:
A deeper dynamic that I think sometimes plays a role is Aumann’s agreement theorem. If person X says A and person Y says not-A, then “clearly” only one of them can be right, and so the fact that there’s a persistent disagreement suggests that there’s something wrong with one of them. This may be upsetting to X and Y (especially the more-identifiable individual of the pair, e.g. the more public person or similar) because it reduces their reputation.
Superficially, the solution may often just be that A holds in some cases and not-A holds in other cases, which can be used to resolve the conflict by having each person mention the case that they are applying it in, and then mutually agreeing that A holds in X’s case while not-A holds in Y’s case. Aumann agreement achieved!
However, the fact that there are people who hold Strong Opinions suggests that those opinions are tied up in some conflict, as otherwise they wouldn’t necessarily have much reason to care what others think. And that means that if one of the cases is mentioned, it brings up that conflict.
And that’s not necessarily so straightforward, because if Y has a case where not-A holds, then X might have trouble acknowledging that not-A might hold in Y’s conflict, because that would mean taking a side in Y’s conflict, and therefore implicitly taking a side against the person Z who Y is in conflict with.
The straightforward answer is just for person X to say that they don’t know anything about Z’s side of the story in the conflict. In principle that should be fine as a solution (though in more complex scenarios there might be some more complicated things going on, e.g. information cascades).
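The case-splitting resolution described above can be phrased probabilistically. A minimal sketch (the cases and numbers are invented for illustration): under one model both parties could share, A is probable in X’s case and improbable in Y’s, so both claims can be right once each is indexed to its case.

```python
# Toy sketch with invented numbers: under a single shared model,
# "A" holds in X's case and "not-A" holds in Y's case, so indexing
# each claim to its case dissolves the apparent disagreement.

# P(A | case) according to one model both parties could share:
p_a_given_case = {"X's case": 0.9, "Y's case": 0.15}

for case, p in p_a_given_case.items():
    verdict = "A" if p > 0.5 else "not-A"
    print(f"{case}: P(A) = {p:.2f} -> {verdict}")
```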
> I believe that Aumann agreement doesn’t apply to humans because, among other things, we do not have common priors.
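The common-priors point can be made concrete with a toy Bayes calculation (the numbers are invented for illustration): two agents who agree on the likelihoods but start from different priors see the same evidence and still end up apart, whereas with a shared prior the same evidence leaves them at the same posterior.

```python
# Toy illustration (invented numbers): Bayesian updating with shared
# vs. differing priors. Aumann's theorem assumes a common prior; without
# one, seeing the same evidence need not bring posteriors together.

def posterior(prior_h, like_e_given_h, like_e_given_not_h):
    """P(H | E) by Bayes' rule."""
    p_e = prior_h * like_e_given_h + (1 - prior_h) * like_e_given_not_h
    return prior_h * like_e_given_h / p_e

# Both agents agree on how likely the evidence is under H vs. not-H...
like_h, like_not_h = 0.8, 0.2

# ...so with a common prior they reach the same posterior:
common = posterior(0.5, like_h, like_not_h)

# With different priors, the same evidence leaves them apart:
x_post = posterior(0.9, like_h, like_not_h)
y_post = posterior(0.1, like_h, like_not_h)

print(f"common prior  -> both at {common:.3f}")
print(f"X's posterior -> {x_post:.3f}, Y's posterior -> {y_post:.3f}")
```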
It seems to apply strongly enough that OP is dissatisfied with dynamics like:

> But unfortunately, their misinterpretations can anchor others and skew the conversation, and a dangling unanswered “Cite specific examples?” comment accrues upvotes pretty quickly, and generates oft-undeserved skepticism through sheer representativeness.

… where a person has a huge effect on people’s beliefs just by saying a few things.
> That’s … not [really/quite] about Aumann.
>
> (I often get frustrated at the “pop culture” understanding of Aumann, which is about as wrong as the pop culture understanding of Dunning-Kruger or the pop culture understanding of Freud. I agree the above is about the pop culture understanding of Aumann.)
The way I interpret it as being about Aumann:
By default, people would Aumann-agree towards the original post. However, if someone raises doubt, they may Aumann-agree that the doubts are plausible, which un-updates them away from the original post.
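One way to picture this un-updating, as a toy model rather than anything in the original exchange: observers who naively nudge their credence toward each public statement swing toward the post, then largely swing back when a single doubt is voiced.

```python
# Toy model (invented numbers) of the deference dynamic described above:
# observers nudge their credence toward each public statement they read,
# so a single voiced doubt can largely undo the original post's effect.

def nudge(credence, statement, weight=0.5):
    """Move credence part-way toward a voiced claim (1 = post is right,
    0 = post is wrong). This is naive deference, not real Aumann updating."""
    return credence + weight * (statement - credence)

crowd = 0.5                # observers start undecided
crowd = nudge(crowd, 1.0)  # original post sounds convincing
after_post = crowd
crowd = nudge(crowd, 0.0)  # one "Cite specific examples?" doubt
after_doubt = crowd
print(f"after post: {after_post}, after doubt: {after_doubt}")
```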