Upvote/downvote symmetry encourages conformism. Why not analyze, from a rational point of view, what good and bad can come from particular posts and comments, and adjust the system accordingly?
Good: The material contains some useful information or insight. Users notice this and encourage it by upvoting. That seems fine to me as it is.
Bad: The material wastes readers’ time and attention. There may also be objective reasons for removal, like infohazards or rule violations. But if some readers feel offended by content because it questioned their beliefs, that isn’t necessarily a valid reason for removal. So I suggest reconsidering the downvoting system.
Regarding the time waste: a post with a properly specified title keeps uninterested readers from looking inside and consumes only a line in the list, while clickbait lures readers in without giving them anything of value. A hard-to-parse but useless text is even more annoying. So perhaps the total time spent by non-upvoting readers, multiplied by their vote power, could work as a downvote penalty?
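To make the proposal concrete, here is a minimal sketch of how such a penalty might be computed. The per-reader data and the seconds-to-karma exchange rate are invented for illustration, not part of any real system:

```python
# A minimal sketch of the proposed penalty, with hypothetical data.
# For each non-upvoting reader, weight their time spent on the post
# by their vote power, and sum the result as a downvote penalty.

# (reading_seconds, vote_power, upvoted) per reader -- all illustrative
readers = [
    (120, 2, False),  # spent 2 minutes, did not upvote
    (30, 1, True),    # upvoted, so excluded from the penalty
    (240, 3, False),
]

def time_waste_penalty(readers, seconds_per_point=60):
    """Sum the vote-power-weighted reading time of non-upvoting readers,
    converted to karma at an arbitrary assumed exchange rate."""
    wasted = sum(seconds * power
                 for seconds, power, upvoted in readers
                 if not upvoted)
    return wasted / seconds_per_point

print(time_waste_penalty(readers))  # (120*2 + 240*3) / 60 = 16.0 points
```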
agreed, I’ve seen contributors who I think would have pushed the field forward get run off the site before they learned the norms, due to high-magnitude feedback arriving fast. the unfortunate thing is that in the low-dimensional representation karma presents right now, any move appears to make things worse. making downvotes and agree-votes targeted, like reacts, might be one option to consider. another would be a warning when downvoting past thresholds, to remind users to consider whether they want to take certain actions and to introduce a trivial inconvenience; e.g., some hypothetical warnings (which would need shortening to be usable in a UI):
“you’re about to downvote this person past visibility. please take 10 seconds to decide if you endorse people in your position making a downvote of this sort of post.”
“you might be about to downvote this person enough to activate rate limiting on their posts. If you value their posting frequency, please upvote something else recent from them or reduce this downvote. Please take 30 seconds to decide if you intend to do this.”
possibly the downvote warnings should have a random offset of up to 3 karma or so, so that the person who pushes a comment over the edge has only some probability of being the one who gets the feedback, rather than being the only one; effectively a form of dropout in the feedback routing.
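a rough sketch of what that jittered warning check could look like, assuming a hypothetical visibility cutoff of -5 karma and invented names (real thresholds and vote mechanics would differ):

```python
import random

# Assumed visibility cutoff and the random offset suggested above.
HIDE_THRESHOLD = -5
MAX_OFFSET = 3

def should_warn(current_karma, vote_delta):
    """Warn the voter if their downvote may push the comment past the
    visibility threshold. The threshold is jittered per check, so only
    some of the voters near the edge see the warning rather than exactly
    the one who crosses it: dropout in the feedback routing."""
    offset = random.randint(0, MAX_OFFSET)
    effective_threshold = HIDE_THRESHOLD + offset
    after = current_karma + vote_delta
    return current_karma > effective_threshold >= after

# e.g. a -2 strong downvote on a comment sitting at -4 karma
# triggers the warning only for some draws of the offset:
print(should_warn(-4, -2))
```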
also, what if you could only strong agree or strong karma vote?
eigenkarma would be a good idea if <mumble mumble> - I prototyped a version of it and might still be interested in doing more. but I suspect most of the difficulty of doing something like this well lies in designing the linkup between human prompts and incentives: you need to prompt users about what sort of incentives they want to produce for others (out of the ones a system makes available to transmit), while simultaneously designing a numeric system that makes well-functioning incentive-producing actions available.
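for concreteness, here is a toy sketch of one common eigenkarma formulation (not the prototype mentioned above): vote weights are taken as the leading eigenvector of the normalized vote graph, computed by damped power iteration, PageRank-style. all numbers are illustrative:

```python
import numpy as np

# votes[i][j] = net karma user j has given user i (illustrative numbers).
# A user's vote weight is the karma they received, weighted by the vote
# weight of whoever gave it, i.e. an eigenvector of the vote matrix.
votes = np.array([
    [0, 4, 1],
    [2, 0, 3],
    [1, 1, 0],
], dtype=float)

def eigenkarma(votes, damping=0.85, iters=100):
    n = votes.shape[0]
    # normalize each giver's column so prolific voters don't dominate
    col_sums = votes.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    M = votes / col_sums
    w = np.full(n, 1.0 / n)  # start from uniform weights
    for _ in range(iters):
        w = (1 - damping) / n + damping * (M @ w)
    return w / w.sum()

print(eigenkarma(votes))  # relative vote weight per user
```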
the LW team seems awfully hesitant to mess with it, and I think they’re accepting a rather huge loss for the world by doing that, but I guess they’ve got other large losses to think about, and it’s hard to evaluate (even for me; I’m not saying they’re wrong) whether this is actually the highest-priority problem.
The system is never going to be all that great: casting a vote is really lightweight, low-information, and low-commitment. That’s a big weakness, and also a requirement for getting any input at all from many readers.
It roughly maps to “want to see more of” and “want to see less of” on LessWrong, but it’s noisy enough that it shouldn’t be taken too literally.