(In principle, one could eliminate the possibility of anonymous mass-downvoting by changing how the LW karma system works. In practice, no one has yet proposed a change that would accomplish this, and so far as I know no such change is known that would work well. And even with such a change fully designed, it would then be necessary to get it incorporated into the LW codebase; it is reasonable to suspect that the odds of that are not good.)
This is one of the conversations I was hoping would be sparked by my complaint, and is the reason why I did not mention names until pressured. (My cost/benefit analysis of mentioning names may well have been flawed; if it was, I will gladly redact [although I’m not sure how much harm that would mitigate at this point {yay recursive parentheses!}].)
I would like to see a system that flags block downvoting for review by a human administrator. I agree that an automatic punishment triggered whenever you downvote everything would be absurd, but that’s a strawman. Something like this is completely viable:
If I am downvoting someone over 70% of whose posts I have already downvoted, AND whose net karma is greater than 60%, automatically forward the downvoted post, along with the downvoter’s name, to a human admin for investigation. (I might make the algorithm slightly more aware, and instead require that [downvotes − upvotes > 70% of posts].)
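A minimal sketch of that triggering rule in Python, assuming the system can supply per-pair vote counts; every name and threshold here is illustrative and not part of any real LW codebase:

```python
def should_flag_for_review(voter_downvoted: int,
                           target_total_posts: int,
                           target_upvoted: int,
                           downvote_threshold: float = 0.70,
                           karma_threshold: float = 0.60) -> bool:
    """Return True when this downvote should be forwarded to a human admin.

    Triggers only on the downvoter's own action: the voter has already
    downvoted more than `downvote_threshold` of the target's posts, AND
    the community as a whole has upvoted more than `karma_threshold` of
    them (so the target is probably not an obvious troll).
    """
    if target_total_posts == 0:
        return False  # nothing to compare against yet
    heavily_downvoted = voter_downvoted / target_total_posts > downvote_threshold
    target_in_good_standing = target_upvoted / target_total_posts > karma_threshold
    return heavily_downvoted and target_in_good_standing


# Voter has downvoted 8 of 10 posts; community upvoted 7 of 10 -> flag it.
print(should_flag_for_review(8, 10, 7))   # True
# Same voter behavior, but the target is broadly downvoted -> no flag.
print(should_flag_for_review(8, 10, 3))   # False
```

Note that the rule deliberately combines both conditions: either one alone would produce far too many flags for an uncompensated admin to review.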
If the downvotee is clearly a troll, a human admin (who is already trusted with this position) will be in an excellent position to make that judgment. If the downvoter is clearly being retributive, a human admin (who is already trusted with this position) will be in an excellent position to make that judgment.
Since it’s automatic and only triggers on the downvoter’s action, a potential downvotee can’t use it as part of a ‘wounded gazelle’ gambit. Since it punts the actual decision-making to a human whom the community has already invested with admin status, ‘literal genie’/automation concerns are replaced with human expertise. The only concern left is that admins will fail to be impartial or will fail to do their job, in which case the community has far bigger problems.
Also that as the set of tasks described as “their job” increases, it becomes less likely that trusted uncompensated human admins will be interested in the job.
...and that, yes. I shall meditate upon this further.
This seems like an excellent solution.