agreed, I’ve seen instances of contributors who I think would have pushed the field forward being run off the site before they learned the norms, because the feedback came fast and at high magnitude. the unfortunate thing is that in the low-dimensional representation karma presents right now, any move appears to make things worse. I think making downvotes and agree-votes targeted like reacts might be one option to consider. another would be a warning when a downvote would push someone past a threshold, to remind users to consider whether they really want to take that action and to introduce a trivial inconvenience; e.g., some hypothetical warnings (which would need shortening to be usable in a UI):
“you’re about to downvote this person past the visibility threshold. please take 10 seconds to decide whether you endorse people in your position downvoting this sort of post.”
“you might be about to downvote this person enough to activate rate limiting on their posts. if you value their posting frequency, please upvote something else recent of theirs, or reduce this downvote. please take 30 seconds to decide if you intend to do this.”
possibly the downvote warnings should have a random offset of up to 3 karma or so, so that the person who pushes a post over the edge only has some probability of being the one who gets the feedback, rather than being the only one; effectively a form of dropout in the feedback routing (rough sketch below).
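to make that concrete, here’s a minimal sketch of the jittered warning check. the threshold value, function name, and jitter range are all made up for illustration, not a proposal for specific numbers:

```python
import random

VISIBILITY_THRESHOLD = -5  # hypothetical karma level at which a post gets hidden

def should_warn(current_karma: int, vote_delta: int, max_offset: int = 3) -> bool:
    """Warn the voter only if this vote crosses a randomly jittered threshold.

    Because the warning fires somewhere within `max_offset` karma *before*
    the real edge, the voter who actually pushes the post over the edge is
    not reliably the one who sees the warning; that's the dropout effect.
    """
    jittered = VISIBILITY_THRESHOLD + random.randint(0, max_offset)
    return current_karma > jittered >= current_karma + vote_delta

# example: post sits at -4, voter is about to strong-downvote by 3
if should_warn(current_karma=-4, vote_delta=-3):
    print("show the 'take 10 seconds to decide' dialog before applying the vote")
```

in practice you’d probably want to draw the jitter once per post rather than per check, so that exactly one voter in the crossing window gets the warning instead of zero or several.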
also, what if you could only strong-agree or strong-karma vote?
eigenkarma would be a good idea if <mumble mumble>. I prototyped a version of it and might still be interested in doing more, but I suspect most of the difficulty of doing something like this well is in designing the linkup between human prompts and incentives: you need to prompt users about what sort of incentives they want to produce for others (out of the ones the system makes available to transmit) at the same time as designing a numeric system that makes well-functioning incentive-producing actions available. (the numeric core itself is comparatively standard; see the sketch below.)
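for reference, the numeric core of an eigenkarma scheme is typically a PageRank-style eigenvector computation over the vote graph: your karma is the trust flowing in from others, weighted by *their* karma, recursively. this is a from-scratch sketch under that assumption, not my actual prototype; the function name, damping value, and seed-trust setup are all illustrative:

```python
import numpy as np

def eigenkarma(votes: np.ndarray, seed_trust: np.ndarray,
               damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Karma where each user's votes are weighted by their own karma.

    votes[i][j] = net karma user i has given user j.
    seed_trust  = manually trusted users anchoring the system.
    """
    # keep only positive endorsements and row-normalize, so each user
    # distributes a fixed trust budget regardless of vote volume
    w = np.clip(votes, 0, None).astype(float)
    row_sums = w.sum(axis=1, keepdims=True)
    w = np.divide(w, row_sums, out=np.zeros_like(w), where=row_sums > 0)

    seed = seed_trust / seed_trust.sum()
    karma = seed.copy()
    for _ in range(iters):
        # power iteration: incoming trust plus a damping term that keeps
        # the system anchored to the manually trusted seeds
        karma = damping * (w.T @ karma) + (1 - damping) * seed
    return karma

# toy example: user 0 is a trusted seed; 0 endorses 1, 1 endorses 2,
# so trust propagates transitively down the chain
votes = np.array([[0, 4, 0],
                  [0, 0, 7],
                  [0, 0, 0]], dtype=float)
print(eigenkarma(votes, seed_trust=np.array([1.0, 0.0, 0.0])))
```

the point of the damping/seed anchoring is sybil resistance: a ring of fresh accounts upvoting each other gets almost no karma unless some already-trusted user endorses the ring.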
the LW team seems awfully hesitant to mess with it, and I think they’re accepting a rather huge loss for the world by doing that, but I guess they’ve got other large losses to think about, and it’s hard to evaluate (even for me; I’m not saying they’re wrong) whether this is actually the highest-priority problem.