if you expect that something will cause a change in your beliefs when it shouldn’t
This seems like a breakdown in reflective consistency. Shouldn't you try to actively counter or avoid the expected irrationality pressure, instead of (irrationally and meekly) waiting for it to nudge your mind in the wrong direction? Is there a specific example that prompted your comment? I can think of some cases offhand. Say, you work at a failing company and you are required to attend an all-hands pep talk by the CEO, who wants to keep employee morale up. There are multiple ways to avoid being swayed by rhetoric: not listening, writing down possible arguments and counterarguments in advance, listing the likely biases and fallacies the speaker will play on and making a point of identifying and writing them down in real time, etc.
No specific examples originally, but Yvain had a nice discussion about persuasive crackpot theories in his old blog (now friends-locked, but I think that sharing the below excerpt is okay), which seems like a good example:
When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.
And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.
And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.
And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology rather than the universally reviled crackpots who write books about Venus being a comet.
I guess you could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try.
As for trying to actively counter the effect of the misleading rhetoric, one can certainly try, but should also keep in mind that we’re generally quite bad at this. E.g. while not exactly the same thing, this bit from Misinformation and its Correction seems relevant:
A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011).
Sure, you should try to counter. But sometimes the costs of doing so are higher than the losses that would result from an incorrect belief.