I wasn’t willing to downvote comments that I thought were actively making the site bad because it seemed too mean.
Can we bring this up to a top-level understanding in our voting system? It seems like we’re confusing stocks and flows in a way that makes it hard to model. There are a bunch of dimensions in the decision to downvote, which get collapsed into one decision:
1) is the post good or bad
2) how has the post been judged so far (current post karma)
3) how will the post be judged in the future (for newish posts, what's its equilibrium karma)
4) how much karma does the poster have, and how will he/she react to my downvote
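The four dimensions above can be sketched as a toy model. This is purely illustrative, not an actual LW mechanism: the type, field names, and threshold logic are all hypothetical, chosen to show how a single up/down decision collapses separate judgments about the flow ("this is bad") and the stock ("its score is wrong").

```python
from dataclasses import dataclass

@dataclass
class DownvoteContext:
    quality: float         # (1) my own judgment of the post (-1..1)
    current_karma: int     # (2) stock: how it has been judged so far
    predicted_karma: int   # (3) flow: where its karma will likely settle
    poster_karma: int      # (4) how established the poster is

def should_downvote(ctx: DownvoteContext) -> bool:
    """Collapse four dimensions into one boolean, which is exactly the
    confusion described above: a vote mixes 'this post is bad' with
    'this post's score is too high'."""
    is_bad = ctx.quality < 0
    # Per the thread: if the comment is already at or below its likely
    # equilibrium, piling on further is "mean", so withhold the vote.
    already_punished = ctx.current_karma <= ctx.predicted_karma
    return is_bad and not already_punished

# A mildly-bad comment already sitting below its equilibrium score:
ctx = DownvoteContext(quality=-0.5, current_karma=-7,
                      predicted_karma=-5, poster_karma=120)
print(should_downvote(ctx))  # False: further downvoting would be piling on
```

Under this framing, dimension (4) would enter as another guard (e.g. being gentler with low-karma posters), which is one way to see why a single vote button is an overloaded interface.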
Mostly, I’d like to ask what’s wrong with the situation where you’re not willing to downvote comments actively making the site bad because it seems mean? If it’s already sufficiently downvoted, I’d argue it _IS_ mean to further downvote it, and you _SHOULD_ choose not to be mean. I really wish I could downvote all the downvotes on a mildly-bad and extremely-unpopular comment on a popular topic.
Background model: my biggest threat model for ways LW can die is commenters that are juuust under the line of "bad enough that it naturally feels right to downvote them." There are a lot of new commenters where, if I see a single comment of theirs, I want to give them the benefit of the doubt. But it becomes clear that there's a pattern there, and that if the site filled up with comments like theirs it'd rapidly become an unfun place to be. (i.e. comments that are slightly clueless, slightly confusing, or slightly uncharitable.)
This doesn’t necessarily have to be resolved with downvotes. Given infinite moderator-time, you could resolve it with PMs to each individual person. But given the resources we have that’s not really practical.
That deserves a top-level post: “what’s your threat model for the future of LW”. The death spiral that worries me most is that new commenters/posters are discouraged because their first post inevitably misses some background, common knowledge, or unstated rule, and gets downvoted more than it “should” be. Existing posters eventually get discouraged (or just exhaust their interest), and the site is an echo chamber for the few people smart enough to get upvotes but not smart enough to realize that nobody cares.
Alternately, the site becomes filled with karma whores trying to guess the password to optimize their scores, and diluting the real value of content-based discussion.
> first post inevitably misses some background, common knowledge, or unstated rule, and gets downvoted more than it "should" be
There are definitely (at least) two possible failure modes to fall into here. But I’m currently not that worried about newcomers getting downvoted because it empirically doesn’t happen that often (we recently set up moderator-sidebar tools that keep mods in the loop about new comments – we see all comments that end up with one-or-less karma, so we have some sense of what trends are common there).
The biggest complaint we hear is about overly-critical comments (which I do think contributes to downvotes feeling harsher). But most of those overly-critical comments are from lower-to-mid-karma users. One of the problems I'm hoping to solve here is to avoid making "superficial criticism" a reliable way to grind karma and gain disproportionate control over the site.
Meanwhile, we have a clear case study of how LessWrong died the first time, and this was largely because the experienced users started drifting off – a process that accelerated as the overall quality of the site went down. (This is discussed in some detail in Habryka's Strategic Overview post, with Scott Alexander's quote being a succinct description of the problem.)
Based on lots of user interviews, it seems like a top priority is making sure LW is a productive place for the top contributors to engage in discussion. This provides the core of content that continues to attract new users in the first place.