I often complain about this type of reasoning too, but perhaps there is a steelman version of it.
For example, suppose the lock on my front door is broken, and I hear a rumour that a neighbour has been sneaking into my house at night. It turns out the rumour is false, but I might reasonably think, “The fact that this is so plausible is a wake-up call. I really need to change that lock!”
Generalising this: a plausible-but-false rumour can fail to provide empirical evidence for something, but still provide ‘logical evidence’ by alerting you to something that is already plausible in your model but that you hadn’t specifically thought about. Ideal Bayesian reasoners don’t need to be alerted to what they already find plausible, but humans sometimes do.
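One rough way to formalise that distinction (my own framing, not something anyone above spells out): the debunked rumour shouldn’t move P(break-in | broken lock) at all, but it can prompt you to finally run a decision calculation your existing beliefs already settled, e.g. noticing that P(break-in | broken lock) × (cost of a break-in) exceeds the cost of a new lock. An ideal Bayesian would have run that comparison the moment the lock broke; a human with limited attention often hasn’t.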
I think you’re misapplying the moral of this comic. The intended reading, IMO, is: “A person believes misinformation, and perhaps they even go around spreading it to others. When they’ve been credibly corrected, instead of scrutinizing their whole ideology, they go, ‘Yeah, but something like it is probably true enough.’” OP doesn’t point to any names or say “this is definitely happening”; they’re speculating about a scenario that may have already happened or may happen soon, and about what we should do about it.
I think this is not analogous:
OP’s situation: There is a plausible bad thing, and there’s a rumor that the bad thing is happening, and the rumor may or may not be true.
Comic situation: There is a plausible bad thing, and there’s evidence of the bad thing, and then, oops, it turns out the evidence is false.
Like, if you’re concerned about something and you get weak positive evidence, that’s not the same as being concerned about something and then getting strong negative evidence.
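To make that contrast concrete with made-up numbers: start from a prior of 0.3 on the bad thing (odds 3:7). Weak positive evidence with a likelihood ratio of 2 moves you to odds 6:7, i.e. roughly 0.46, a modest bump. Strong negative evidence with a likelihood ratio of 1/10 moves you to odds 0.3:7, i.e. roughly 0.04, which should genuinely lower your concern rather than leave you saying “something like it is probably true anyway.”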