I think you might be over-updating from your original post. You had a lot of somewhat unrelated and potentially politically sensitive statements (ethnonationalism, IQ, managerial class, ethics, government debt, taboos, egalitarianism, AI stuff). Even if one agrees with the majority of your points, it is tempting to agreement-downvote because of the minority, especially since those carry high valence due to their sensitive nature.
I don’t think it’s specific to sensitive topics; Richard just does a lot of sloppy thinking when he tries to engage with politics. His posts/talks on more mundane political topics also led to a lot of people on LW & the EA Forum pointing out things he got wrong.
For the record: I do agree that a bunch of my political thinking is sloppy. Right now it feels like I’m facing a tradeoff between speed of conceptual progress and precision of thinking, and I’m optimizing primarily for the former.
One reason I discussed the analogy to ML above is that I hoped it would help people understand why I’m making this tradeoff. For example, I suspect that many LWers remember their thinking about AGI being called sloppy by the mainstream ML community because it didn’t have equations. I think in hindsight it was the correct choice for LW to focus on that kind of “sloppy” exploratory thinking.
Having said that, it’s clearly possible to go too far in this direction, and I regret giving the EAG talk in particular. More generally, there’s a difference between doing sloppy thinking with intellectual collaborators and broadcasting sloppy thinking to the world. Part of what I’m trying to figure out is the extent to which I should think of LW posts as the former vs the latter.
I regret giving the example of the disagree-votes; it’s not that important to me, and I agree there are all sorts of reasons you might want to disagree-vote on my previous post. I’m trying to point at a broader dynamic (and I elaborate more on it in this reply to Raemon).