The post linked was in part a response to a comment of yours on my last post.
This shows up a lot in the political examples. The big issue I’ve noticed in political discourse is that everyone goes after the weakest arguments on the other side and doesn’t steelman their opponents. (This combines with another issue: a lot of people try to smuggle in moral claims on the back of factual claims, and to use those factual claims to normalize hurting or killing people on the other side, because plenty of people simply want to hurt or kill others and are bottlenecked only by logistics and opposition.)
This is one of my leading theories on how political discussions go wrong nowadays.
Another example: back in 2006-2008, the orthogonality thesis and instrumental convergence were used to debunk bad arguments from the AI optimists of the time. One of the crucial mistakes that I think doomed MIRI to unreasonable (in my eyes) confidence about the hardness of the AI safety problem is that they kept engaging with bad critics instead of trying to invent imaginary steelmans of the AI optimist position (I think the AI optimists have done this too, to a lesser extent), though to be fair we knew a lot less about AI back in 2006-2008.
This is also why empirical evidence is usually far more valuable than arguments: it cuts out the selection effects that can be a massive problem, and it is undoubtedly a better critic than anyone is likely to generate (except in certain fields).
This is also why I think the recent push to get AI safety traction among the general public by creating a movement is a mistake.
So one of the key things LWers should be expected to do is to steelman beliefs they think are wrong (moral beliefs excepted), and to always focus on the best arguments and evidence.
A big part of the problem, in a sense, is that discussion usually focuses on dunking on bad arguments.
One of the takeaways from the history of science and progress is that, in general, you should pretty much ignore bad arguments against an idea, and most importantly not update towards your idea being correct on the basis of having rebutted them.
On the steelmanning point, Zack M. Davis’s Steelmanning is Normal, ITT-passing is Niche is relevant here (with two caveats: when one person simply has far more knowledge, the ITT becomes disproportionately useful, and when emotions are a rate-limiting factor, ITTs are also necessary).