My theory is that the main things that matter are content and enforcement of strong intellectual norms, and both degraded around the time a few major high-status members of the community mostly stopped posting (e.g. Eliezer and Yvain).
The problem with the lack of content is obvious. The problem with the lack of enforcement is that most discussions are not very good, and it takes a significant amount of feedback to make them better. But it's hard for people to get away with giving subtle criticism unless they're already a high-status member of the community, and upvotes/downvotes are just not sufficiently granular.
This feels like a good start but one that needs significant improvement too.
For instance, I’m wondering how much of the situation Anna laments is a result of LW lacking an explicit editorial policy. I for one never quite felt sure what was or wasn’t relevant for LW—what had a shot at being promoted—and the few posts I wrote here had a tentative aspect to them because of this. I can’t yet articulate why I stopped posting, but it may have had something to do with my writing a bunch of substantive posts that were never promoted to Main.
If you look only at the home page (recent articles in Main), you could infer that the main topics on LessWrong are MIRI, CFAR, FHI, and “the LessWrong community”, with a side dish of AI safety and startup founder psychology. This doesn’t feel aligned with “refining the art of human rationality”; it makes LessWrong feel more like a corporate blog.
Agree that a lot more clarity would help.
Assuming Viliam’s comment about the troll is accurate, that alone is probably sufficient to explain the decline: http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di2n