A dramatic understatement: I found this to be far superior to the WAitW article, as it’s more concrete and offers reasonable advice to its readers. By being more systematic, it strikes me as a better illustration of the WAitW than the actual WAitW article.
Thank you for linking to this article; I enjoy seeing other people make my point better than I could. Here are some additional thoughts:
My first thought after reading the linked article was “pick your battles”, especially as expressed in Paul Graham’s “What You Can’t Say”. It sounds like the exact opposite of “Atheism+”, and yet they both seem to make a lot of sense… well, how is that possible?
More generally: You hold opinions X and Y, and they are both important to you. Another person agrees with X and disagrees with Y. When should you treat this person as an ally, and when should you treat them as an enemy? Let’s suppose that both X and Y are your core values, so you can’t decide by “which one is more important to you”.
Seems to me that when X is endangered, each proponent of X is a gift, and you don’t look a gift horse in the mouth. On the other hand, when X is safe (it may still be a minority belief, but it gains momentum irreversibly), it is strategic to associate X with Y as much as possible, to transfer some of that momentum to Y; declaring the “X but not Y” people “enemies of the true X (which includes Y)” is the obvious way to do it. You can afford to alienate the few “X but not Y” people if X will win without them anyway.
This was a strategic analysis in general, but now let’s look at these specific X and Y; namely: what could possibly be wrong with asking people to be compassionate and reasonable, and not be a bully?

Well, it depends on your specific definitions of “compassionate”, “reasonable”, and “bully”. Yes, the devil is in the details. As long as the vocal people are allowed to redefine these words to mean exactly what they need them to mean at a given moment, and especially if “surely, you said A, but we all know that’s just code for B” arguments are accepted, it becomes possible to relabel any dissent as bullying, and to ostracize people not because they disagree, but because their behavior is interpreted as ethically unacceptable. (Even pointing out this mechanism will be problematic, because we all know that only a bully would expend their energy defending bullies.)
If you haven’t read that article yet, you probably should, since the rest of this post will make very little sense without it.
Actually, although I can’t perfectly emulate a person who hasn’t read that post, I believe this one can be understood just fine without it. The content of this post could be summed up as “the things you associate with this particular -ism aren’t going to be the same things that everyone else associates with this -ism”, which can be easily understood even without knowing what the WAitW is.
I actually suspect that you shouldn’t even mention the WAitW, because the WAitW implies that the other person is employing motivated cognition and throwing any poor argument they can come up with at you in order to shut you up. In contrast, here they might just genuinely have a different mapping between verbal and conceptual space than you do.
I believe the correct term is “straw individual”; the article by Yvain is well worth reading.
WAitW = “Worst Argument in the World”, yes? The acronym is unclear.
Yes.
Also this comment by Kaj_Sotala:
Has it occurred to anyone that worst-argument-in-the-world type thinking is probably a result of the affect heuristic?