I agree that there’s a thing going on with “the evidence is so strong that I update significantly even if it comes from someone’s motivated cognition”, but I think there’s also something more general going on, which has to do with gears-level models.
If we were perfect Bayesians, there would be no distinction between “the evidence that made us believe” and “all the evidence we have”. However, we are not perfect Bayesians, and logical induction captures some of what’s going on with our bounded rationality.
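To make the “no distinction” claim concrete: for an ideal Bayesian, the posterior depends only on the total evidence, not on the order or occasion on which each piece arrived. A minimal sketch (the prior of 0.3 and the likelihood ratios are hypothetical numbers, chosen only for illustration):

```python
from functools import reduce

def update(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical likelihood ratios for four independent pieces of evidence
evidence = [2.0, 0.5, 3.0, 1.5]

p_forward = reduce(update, evidence, 0.3)
p_reverse = reduce(update, reversed(evidence), 0.3)

# The updates commute: "the evidence that made us believe" and
# "all the evidence we have" yield the same posterior.
assert abs(p_forward - p_reverse) < 1e-12
```

For a bounded reasoner, nothing guarantees this commutativity: the order in which we encountered arguments, and which ones we happened to notice, leaves fingerprints on our beliefs that pure evidence-accounting can’t recover.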
On my analysis, gears are the parts of our models which are Bayesian in that way: we can put weight on them based on all the evidence for and against them, because the models are “precise” in a way which allows us to objectively judge how the evidence bears on them. (Other parts of our beliefs can’t be judged in this way, due to the difficulty of overcoming hindsight bias.)
Therefore, filtering our state of evidence through gears-level models lets us convey evidence which would have moved us if we were more perfectly Bayesian. We are speaking from a model, describing an epistemic state which isn’t actually our own but which is more communicable.
This is all deliciously meta, because this comment is itself an example of me taking some strong intuitions and attempting to put them into a gears-level model in order to communicate them well. I think there’s a bigger picture which I’m not completely conveying, which has to do with logical induction, bounded rationality, Aumann agreement, justification structures, hindsight bias, motivated cognition, and intuitions from the belief propagation algorithm.