For the record: I do agree that a bunch of my political thinking is sloppy. Right now it feels like I’m facing a tradeoff between speed of conceptual progress and precision of thinking, and I’m optimizing primarily for the former.
One reason I discussed the analogy to ML above is that I hoped it would help people understand why I’m making this tradeoff. For example, I suspect that many LWers remember having their thinking about AGI called sloppy by the mainstream ML community because it didn’t have equations. In hindsight, I think it was the correct choice for LW to focus on this kind of “sloppy” exploratory thinking.
Having said that, it’s clearly possible to go too far in this direction, and I regret giving the EAG talk in particular. More generally, there’s a difference between doing sloppy thinking with intellectual collaborators and broadcasting sloppy thinking to the world. Part of what I’m trying to figure out is the extent to which I should think of LW posts as the former vs the latter.