You talk about using karma thresholds for various things. But traditional lesswrong style karma screens more for quantity than quality of posts, and this would remain true of a version where you weight people’s upvotes and downvotes. I suggest looking for versions which filter more for quality (while not creating too much disincentive to make additional posts/comments).
This is actually a feature, not a bug. The karma threshold isn’t just there to limit who has access to features; it’s also to increase the cost of creating sockpuppets and of recovering from bans.
I think keeping some dependence on quantity is desirable, but scaling linearly with the number of posts weights it too heavily relative to variation in the number of upvotes (I proposed scaling with roughly the cube root of the number of posts in my explicit formula suggestion elsewhere in this comment thread).
Here’s an example functional form which is my best guess off the top of my head at creating this effect (I’m giving it as an illustration of what to pay attention to, rather than a claim that precisely this should be used):
K = (U − 3D) * P^0.3 / R
Where
K = Karma
U = total (weighted) upvotes
D = total (weighted) downvotes
P = total number of posts+comments
R = total number of reads of your posts+comments
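The formula above can be sketched directly in code. This is a minimal illustration of the proposed functional form, not a tuned implementation; the downvote weight (3) and the exponent (0.3) are the rough guesses from the comment, and the zero-reads guard is my own addition to avoid division by zero:

```python
def karma(upvotes, downvotes, num_posts, num_reads):
    """Illustrative karma formula: K = (U - 3D) * P^0.3 / R.

    upvotes, downvotes: total (weighted) upvotes and downvotes.
    num_posts: total number of posts + comments.
    num_reads: total number of reads of the user's posts + comments.
    The constants (downvote weight 3, exponent 0.3) are rough guesses.
    """
    if num_reads == 0:
        # No reads yet, so there is no signal to score; guard added here.
        return 0.0
    return (upvotes - 3 * downvotes) * num_posts ** 0.3 / num_reads
```

For example, a user with 100 upvotes, 10 downvotes, 20 posts, and 500 reads would score (100 − 30) · 20^0.3 / 500 ≈ 0.34.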
I also agree with the spirit of this, but I think dividing upvotes by the number of reads penalizes reads excessively, because each reader doesn’t decide how to vote independently. Once a post already has a high score, a new reader is not likely to upvote it more even if they think it’s high quality. Also we ought to encourage people to create highly popular articles that spread our ideas beyond the local community, and this system would serve to discourage that. On the other hand we also don’t want to penalize people for writing specialized content that only a few others might read. I’m not sure what the right solution is here.
I agree with the spirit of this. That said, if the goal is to calculate a Karma score which isn’t fooled by a user posting a large amount of low-quality content, it might be better to do something roughly like: sum((P*x if x < 0 else max(0, x-T)) for x in post_and_comment_scores). Only comments that hit a certain bar should count at all. Here P is the penalty multiplier for creating bad content, and T is the threshold a comment score needs to meet to begin counting as good content. Of course, I also agree that it’s probably worth weighting upvotes and downvotes separately and normalizing by reads to calculate these per-(comment or post) scores.
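The one-liner above can be wrapped into a runnable sketch. The default values for P and T below are placeholders I chose for illustration; the comment does not suggest specific values:

```python
def karma_score(post_and_comment_scores, penalty=2.0, threshold=3.0):
    """Sum per-item scores so low-quality content is penalized.

    Negative item scores are amplified by `penalty` (the P above);
    positive item scores only count for the amount by which they
    exceed `threshold` (the T above), so mediocre items contribute
    nothing. Default P and T are arbitrary placeholders.
    """
    return sum(penalty * x if x < 0 else max(0.0, x - threshold)
               for x in post_and_comment_scores)
```

With these placeholder values, item scores [5, −2, 1, 10] contribute 2, −4, 0, and 7 respectively, for a total of 5: the score of 1 falls below the threshold and counts for nothing, while the −2 is doubled.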
I was just writing a very similar function in one of the comments above!
I think something in this direction makes sense.
My current interpretation is that you mean that people who write a lot of content generally get much more karma than people who write little but very good content.
I agree with that, and have been thinking about good ways of dealing with that. Here are two approaches:
When deciding what to show the user, use an algorithm that combines the information of:
1) How many upvotes did this piece of content get?
2) How many users have seen this piece of content?
3) How many downvotes did this piece of content get?
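One simple way to combine those three signals (my own sketch, not something proposed in the comment) is to rank by net votes per view, with a few phantom views added so that content seen by almost nobody isn’t ranked on one or two early votes:

```python
def display_score(upvotes, downvotes, views, prior_views=10):
    """Rough quality estimate: net votes per view, smoothed.

    `prior_views` adds phantom views so lightly-viewed content
    isn't ranked purely on a couple of early votes. The prior
    size of 10 is an arbitrary illustrative choice.
    """
    return (upvotes - downvotes) / (views + prior_views)
```

A piece with 8 upvotes, 2 downvotes, and 90 views scores 6/100 = 0.06, while the same votes on only 5 views would score 6/15 = 0.4, so well-received niche content can still outrank broadly-seen but mediocre content.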
Allow users to give out variable Karma rewards, with some cost attached to them. Maybe Karma transfers, or some limited amount of currency that’s generated based on your current karma. Top comments would then receive more of this limited amount of currency.
We could have some scarce resource based on karma. Not paying with karma directly, because I guess losing karma would feel bad, but rather that for each 100 karma points you get 1 “credit”.
You could then spend those credits e.g. on visually highlighting other people’s comments and articles. Something like when Reddit displays that a comment got “Reddit gold”. It could even transfer some karma (but much less than it costs) to the rewarded user, but mostly it would be a costly signal of “I really liked this”, with the name of the person giving the reward displayed as a tooltip. A costly version of “+1 nice”, essentially.
One variation of karma system I’d like is the ability to rate posts as being exceptionally good (probably taking more than one click, to introduce a trivial inconvenience so that it isn’t used all the time like five star ratings are). This would give more ability to pick out very useful contributors from small numbers of posts.
Agree. One of the broader things I have been thinking of is a similar two-tier voting system like Facebook has.
There is the primary interaction of upvoting and downvoting, but then there are additional vote-types you can access with an additional click (on Facebook “angry”, “sad”, etc.; here it would be “exceptionally good point”, “needs clarification”, “too aggressive” or something along those lines).
First click could be the generic upvote or downvote; then using a second click you could pick a more specific “flavor” of the vote. (Different flavors for upvotes, and for downvotes.)
I think publicly applying badges to a comment should be completely orthogonal to anonymously voting on it. EDIT: now a feature request.
I’d like to see my old suggestion either folded into this feature, or perhaps as an independent one, whichever makes more sense.
That’s a really neat idea.
Being able to sort by some of those would also be helpful (e.g. Sort comments by ‘exceptional insight’, don’t show me comments with >2 ‘overly aggressive’.)
You mentioned an alternative comment structure in the features post. Some of the value of that could be achieved by a 2nd tier vote saying a comment is a key consideration (e.g. “This is a crux”), and being able to sort by that.