I think you’re comparing the goals of past LessWrong to the goals of present LessWrong. I don’t think present LessWrong really has the goal of refining the art of rationality anymore. Or at least, it has lost interest in developing the one meta-framework to rule them all, and gained much more interest in applying rationality & scholarship to interesting & niche domains, and seeing what generalizable heuristics it can learn from them. Most commonly AI, but look no further than the curated posts for other examples. To highlight a few:
How to Make Superbabies
AI 2027: What Superintelligence Looks Like
Explaining British Naval Dominance During the Age of Sail
Will Jesus Christ return in an election year?
Broad-Spectrum Cancer Treatments
And I do think, at this stage, this is the right collective move to make. I do often roll my eyes when I see new insight-porn on the front page. It is almost always useless. I think actually going out, doing some world-modeling, and solving problems is what it looks like to refine the art after you’ve read, like, the Sequences, and Superforecasting, and some linear algebra.
I’m not saying there should be no meta-thinking, but once your epistemics aren’t embarrassingly incompetent, and you’ve absorbed most of the good philosophical arguments here, meta-thinking will end up looking like doing a bunch of deep dives, making forecasts, and solving problems, then coming back to the community, presenting your results, and thinking about how you could’ve done better.
To borrow an old metaphor, you don’t get good at martial arts by sitting alone in a room thinking about how to be a good martial artist. You get good by going out and actually fighting. And similarly, you shouldn’t trust people giving you “rationality advice” unless they themselves are accomplished in a wide variety of fields (or, of course, have sufficiently good (read: mathematical) arguments on their side).
Edit: I think, to a large extent, what’s also going on here is nostalgia on your part. The past of LessWrong was different, but I don’t think it was better than what we have now. I for one wouldn’t trade one GeneSmith for 10 Duncans!