At least in my experience, having high karma is very little evidence of being a good commenter. It’s almost exclusively evidence of being a frequent commenter, and frequent commenters also happen to be the people it’s most important to rate limit. We have some rules based on recent karma, which are generally less harsh, and some much harsher rules based on total karma, so we do take this into account, but overall I think it’s crucial for rate limiting to apply to high-karma accounts as well.
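A minimal sketch of what such tiered rules might look like. The function name, thresholds, and limits here are all illustrative assumptions, not LessWrong’s actual values; the only thing it’s meant to show is the shape of the scheme: a gentle rule keyed on recent karma and a harsher one keyed on total karma.

```python
# Hypothetical sketch of tiered rate-limit rules. All numbers are
# made up for illustration; they are not the site's real thresholds.

def comments_allowed_per_day(recent_karma: int, total_karma: int) -> int:
    """Return a daily comment cap based on recent and all-time karma."""
    # Harsher rule keyed on all-time karma: a very negative account
    # is limited severely regardless of recent activity.
    if total_karma < -50:
        return 1
    # Gentler rule keyed on recent karma: a bad recent stretch only
    # triggers a mild limit, even for established accounts.
    if recent_karma < -5:
        return 3
    return 1000  # effectively unlimited

# An established account having a bad week gets the mild limit:
assert comments_allowed_per_day(recent_karma=-10, total_karma=200) == 3
```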
We have considered making rate limits depend on average karma instead, but haven’t done so because I don’t want to disincentivize productive niche conversations; still, it might be a better starting point.
Why don’t you make it so karma from posts gives a much higher boost to total karma than karma from comments (maybe 5x, possibly even 10x)? This has seemed like an obvious improvement to me for a while. (If you don’t wanna inflate total karma, you could instead do 13x vs. 3x or something.)
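As a sketch, the proposal amounts to something like this. The 5x weight is just the number floated above, and the function is hypothetical, not anything the site implements:

```python
# Illustrative sketch of weighting post karma above comment karma
# when computing a user's total. The 5x default is the multiplier
# suggested in the comment above, not an implemented value.

def weighted_total_karma(post_karma: int, comment_karma: int,
                         post_weight: int = 5) -> int:
    """Total karma with post karma counted post_weight times over."""
    return post_weight * post_karma + comment_karma

# A user with 40 post karma and 200 comment karma:
assert weighted_total_karma(40, 200) == 400
```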
Sometimes I wonder whether there should just be nonlinear returns to karma on any item. Like, a 100 karma post should count for much more than 20 5-karma comments / posts.
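One way to cash out “nonlinear returns” is a power law on each item’s karma. The exponent of 1.5 below is an illustrative assumption; any exponent above 1 would make one high-karma post outweigh many small contributions:

```python
# Sketch of superlinear per-item karma, assuming a power law.
# The 1.5 exponent is an illustrative choice, not a site parameter.

def nonlinear_score(item_karmas: list[float], exponent: float = 1.5) -> float:
    """Sum each positive item's karma raised to the given exponent."""
    return sum(k ** exponent for k in item_karmas if k > 0)

# One 100-karma post vs. twenty 5-karma comments:
one_big = nonlinear_score([100.0])        # 100^1.5 ≈ 1000
many_small = nonlinear_score([5.0] * 20)  # 20 * 5^1.5 ≈ 224
assert one_big > many_small
```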
I feel like it’s a thing where you should use human moderator judgment once the account isn’t new. Figure out how the person is being counterproductive, warn them about it, and if they keep doing the thing, ban them. Ongoing mechanisms like this make sense for something like Reddit, where there is basically zero community at this point, but on LW, if someone is sufficiently detached from the forum and community that it actually makes sense to apply a mechanical paper cut like the rate limit to them after years of them being on the site and accumulating positive karma, they probably shouldn’t be here to begin with.
The basic problem is that it’s not treating the person as a person, the way a human moderator actually talking to them and going “hey, we think you’re not helping here, here’s why … in the future could you …” (and then proceeding to a ban if there’s no improvement) would be. People occasionally respond well to moderator feedback, but being hit by the rate-limiter robot is pretty likely to piss off just about any invested and competent person, and might also make them go “cool, then I’ll treat your thing as less of a community made of people and more like a video game to beat on my end as well”, which makes future improvement less likely.
I think the impartiality really helps. The default thing that happens if we threaten moderation action against any specific individual with a long history on LW is that they feel personally persecuted, complain about it publicly, and generally try to rile up a bunch of social momentum to defend against the perceived persecution, which then produces a lot of distrust and paranoia and stress for everyone involved.
A nice thing about automatic rate limits is that it’s really transparent that we are not doing some kind of strategic purging of dissenters or trying to find the most politically convenient pretense by which to ban someone, which many people tend to be worried about (I think not without reason, given the outside view on the realities of human politics). I think for many people it is much less stressful to interact with a deterministic machine than with a human who could potentially be pulling some kind of galaxy-brained strategic move at each step.
they probably shouldn’t be here to begin with.
Many people get triggered for a while. LessWrong commenters change in quality. People get caught in some horrible demon-thread where they feel like they have to keep saying things or lose face. Temporary rate-limits do actually catch many of those cases reasonably well.
The basic problem is that it’s not treating the person as a person, like a human moderator actually talking to them and going “hey, we think you’re not helping here, here’s why … in the future could you …” (and then proceeding to a ban if there’s no improvement) would be.
To be clear, the thing I would do instead of a ban in most cases is an intense rate limit. Rate limits just have much better properties, in that in most cases they don’t completely ban certain viewpoints from the site.
I also think you vastly overestimate our ability to give people constructive feedback. New-content review and moderation already takes up around one full-time equivalent on average. We don’t have time to do much more of that.
And lastly, I also think you just underestimate the internet’s tendency to desperately freak out if you ever try to ban anyone. Every time we consider banning any long-term contributor, no matter how obviously harmful they seem for the site, we have dozens of people who otherwise leave good comments come out of the woodwork, strong-downvote anything even remotely adjacent to the discussion that tries to explain the rationale, complain in like 15 different places, threaten to leave the site, threaten to become enemies of LessWrong forever, and all kinds of things. I think some of that instinct is healthy, but I really think you are vastly underestimating the cost associated with banning a long-time contributor.
Something a bit awkward about weighting post karma more heavily is that it incentivizes turning long comments and quick takes into low-effort posts.