What do you think about the StackExchange model of moderation here, where people gradually acquire more moderation powers the higher their karma? What I really like about this model is that it doesn’t require that any one person do a lot of work. Of course it’s susceptible to value drift: it’s unclear what you’ll end up optimizing for once you put power in the hands of people who do the best at getting upvotes from whoever else is out there. Arguably it made MathOverflow less interesting in the long run: I think most of the high-rep people there (including me) eventually got too trigger-happy about closing “soft” questions that weren’t just looking for technical results.
The StackExchange model comes with clear rules about what content is allowed and what isn't. On LessWrong or in the comment thread of a blog, deleting that much content wouldn't be a good fit.
I'd advocate using "upvoted by people with lots of karma" as a feature for prediction, with the trusted moderator still serving as the ground truth.
This is similar to my eigendemocracy-based proposal here.
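The eigendemocracy idea can be sketched as PageRank-style power iteration over the upvote graph: a user's trust weight is high when high-weight users upvote them. This is only a toy illustration under assumed data; the upvote matrix, damping factor, and iteration count below are all hypothetical choices, not part of the original proposal.

```python
import numpy as np

# Hypothetical upvote counts: votes[i, j] = times user i upvoted user j.
votes = np.array([
    [0, 3, 1],
    [2, 0, 4],
    [1, 1, 0],
], dtype=float)

# Column-normalize so each user's outgoing votes sum to 1,
# as in PageRank's column-stochastic link matrix.
col_sums = votes.sum(axis=0)
M = votes / np.where(col_sums == 0, 1, col_sums)

# Power iteration with damping: each user's weight is fed by
# the weights of those who upvote them.
n = M.shape[0]
w = np.full(n, 1.0 / n)
damping = 0.85
for _ in range(100):
    w = (1 - damping) / n + damping * (M @ w)
w /= w.sum()

# w[j] is user j's trust weight. The feature "upvoted by people with
# lots of karma" then becomes a w-weighted upvote count on a comment,
# used to predict what the trusted moderator (ground truth) would do.
```

The damping term plays the same role as in PageRank: it keeps the iteration well-behaved even when some users cast no votes.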