I think if you’re a rationalist—if you value truth, and coming to truth through the correct procedure—then you should strongly dislike lengthy analogies that depict one’s ideological opponents repeatedly through strawmen / weakman arguments.
As a rationalist I also strongly dislike subtweeting.
I agree in general, but think this particular example is pretty reasonable because the point is general and just happens to have been triggered by a specific post that 1a3orn thinks is an example of this (presumably this?).
I do think it’s usually better practice to list a bunch of examples of the thing you’re referring to, but specific examples can sometimes be distracting/unproductive or cause more tribalism than needed. In this case I think it would probably be better if people considered the point in the abstract (decoupled from its implications), thought about how much they agreed, and then afterwards applied it on a case-by-case basis. (A common tactic that e.g. Scott Alexander uses is to first make an abstract argument before applying it, so that people are more likely to properly decouple.)
I have a hard time imagining someone writing this without subtweeting. It feels like classic subtweeting to me, especially “I think this is pretty obvious”. The point is trivially true; all the debate is in its applicability/relevance to the situation. I don’t see any purpose for it except the classic subterfuge of lowering the status of something in a way that’s hard for the thing to defend itself against.
My standard refrain is that open aggression is better than passive aggression. The latter makes it hard to trust people’s stated intentions, and makes people more paranoid, suspecting that others are semi-covertly coordinating to lower their status all the time. For instance (and to be clear, this is not the current state), it would not be good for the health of LW if people regularly saw others discussing “obvious” points in shortform and ranting about people not getting them, only to later find out it was a criticism of a post of theirs that they didn’t think was subject to that criticism!
Thing likely being subtweeted: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius
1a3orn can correct me if I’m wrong. You’re welcome, confused future readers.
I agree. I think spending all of one’s time thinking about and arguing with weakman arguments is one of the top reasons why people get set in their ways and stop tracking the truth. I aspire not to do this.
Sometimes the “weakmen” are among the most memetically fit arguments in the space, even if you could also point to much smarter arguments on the same ideological side. For example, I took a quick sample of Reddit attitudes about current AI capabilities here: https://www.lesswrong.com/posts/W2dTrfTsGtFiwG5hM/origins-and-dangers-of-future-ai-capability-denial?commentId=R54z6dNqs2JpALRYe
I think it would be fair game to try to combat these specifically, especially if you could do it in an engaging way that was more of a memetic match for these sorts of things. And it would be valid from a truthseeking perspective since people swayed by these weak arguments might now see the flaws in them.
But then, you would of course have people upset in the comments that you’re depicting your ideological opponents as strawmen/weakmen, and that there are these much more reasonable arguments X, Y, and Z.
(Similarly, there is often a way in which the weakman is someone’s true reason for believing in something, and the “strongman” is creative sophistry meant to make it more defensible. I also believe in that case that it’s fair to go for the weakmen specifically (e.g. atheism debates are often like this).)
I think trying to win the memetic war and trying to find the truth are fundamentally at odds with each other, so you have to find the right tradeoff. Fighting the memetic war actively corrodes your ability to find the truth. This is true even if you constrain yourself to never utter any knowing falsehoods: even just arguing against the bad arguments over and over again calcifies your brain and makes you worse at absorbing new evidence and changing your mind. Conversely, committing yourself to finding the truth means you will get destroyed when arguing against people whose only goal is to win arguments.
I suspect I know what article inspired this. I am less sure that it was an actual argument, than something like an exhaustive catalog of other people’s annoyingly bad arguments. Had it been prefixed with “[Warning: Venting]” I would have found it unremarkable.
However, there is an annoying complication in certain discussions of AI safety, where people argue that AI safety is really easy because of course we’ll all do X. X is typically something like “Lock the AI in a box,” which of course would never work, because someone would immediately give the AI full commit privs to production and write a blog post about how they never even read the code. And once you have argued against that plan working, people propose plans X1, X2, X3, etc., all of which could be outsmarted by a small child. And everyone insists on a personal rebuttal, because their plan is different.
So you wind up with a large catalog of counterarguments to dumb plans. Which looks a lot like dunking on strawmen.
There are no rationalists in an ideological disagreement.