Can you say something about who would be able to see the individual ratings of comments and users?
Only people who police spam/abuse; I imagine they’d have full DB access anyway.
What do you see as the pros and cons of this proposal vs. other recent ones?
An excellent question that deserves a longer answer, but in brief: I think it’s more directly targeted towards the goal of creating a quality commons.
What’s the reason for this?
Because I don’t know how else to use the attention of readers who’ve pushed the slider high. Show them both the comment and the reply? That may not make good use of their attention. Show them the reply without the comment? That doesn’t really make sense.
Note that your karma is not simply the sum or average of the scores on your posts; it depends more on how people rate you than on how they rate your posts.
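To make the weighting concrete, here is a toy sketch of what such a karma score might look like. The 80/20 split and the function itself are my illustrative assumptions, not the proposal's actual formula:

```python
def karma(user_ratings, post_scores, user_weight=0.8):
    """Toy karma score: driven mostly by how other users rate you
    directly, with the scores on your posts as a minor component.
    The weights here are illustrative, not part of the proposal."""
    r = sum(user_ratings) / len(user_ratings) if user_ratings else 0.0
    p = sum(post_scores) / len(post_scores) if post_scores else 0.0
    return user_weight * r + (1 - user_weight) * p

# A user whom others rate highly outranks a user who only has
# well-scored posts, even though the latter's post average is higher.
well_rated_user = karma([0.9, 0.8], [0.3])
well_scored_posts = karma([0.3], [0.9, 0.8])
```
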
This seems to create an opening for attack.
Again, the abuse team really needs full DB access, or something very like it, to do its job.
Can you point to an intro to attack-resistant trust metrics?
The only adequate introduction I know of is Raph Levien’s PhD draft, which I encourage everyone thinking about this problem to read.
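To give a flavor of the family of ideas involved: trust is propagated outward from a trusted seed along certification edges, so that a clique of sock puppets that only certify each other accumulates nothing. The sketch below is a toy eigenvector-style propagation (essentially personalized PageRank); Levien’s actual metric is based on network flow and has stronger attack-resistance guarantees, so treat this only as an illustration of the general shape:

```python
def propagate_trust(edges, seed, n_iter=50, alpha=0.85):
    """Toy trust propagation from a trusted seed along (certifier,
    certified) edges. Each round, a node passes a fraction alpha of
    its trust evenly to the nodes it certifies; the seed is topped up.
    NOT Levien's flow-based metric -- just the broad idea."""
    nodes = {u for e in edges for u in e}
    trust = {u: (1.0 if u == seed else 0.0) for u in nodes}
    out = {}
    for a, b in edges:
        out.setdefault(a, []).append(b)
    for _ in range(n_iter):
        nxt = {u: ((1 - alpha) if u == seed else 0.0) for u in nodes}
        for a, succs in out.items():
            share = alpha * trust[a] / len(succs)
            for b in succs:
                nxt[b] += share
        trust = nxt
    return trust

certs = [("root", "alice"), ("alice", "bob"),
         ("mallory", "sock"), ("sock", "mallory")]
t = propagate_trust(certs, "root")
# alice (certified by the seed) ends up with more trust than bob
# (one step further out); mallory's self-certifying pair, unreachable
# from the seed, gets none -- the core attack-resistance intuition.
```
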
Why would it be annoying?
When an untrusted user downvotes, a trusted user or two will end up being shown that content and asked to vote on it; it thus could waste the time of trusted users.
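The mechanism just described could be sketched roughly as follows (the function and parameter names are mine, not part of the proposal):

```python
import random

def handle_downvote(voter_trusted, content_id, trusted_pool,
                    review_queue, sample_size=2):
    """Sketch of the scheme above: a downvote from an untrusted user
    doesn't take effect directly; instead the content is queued for
    one or two randomly chosen trusted users to confirm or reject."""
    if voter_trusted:
        return "applied"  # trusted votes count immediately
    reviewers = random.sample(trusted_pool,
                              min(sample_size, len(trusted_pool)))
    review_queue.append((content_id, reviewers))
    return "queued"

queue = []
trusted_outcome = handle_downvote(True, "c1", ["a", "b", "c"], queue)
untrusted_outcome = handle_downvote(False, "c2", ["a", "b", "c"], queue)
```

This is where the time cost comes from: every untrusted downvote consumes a small amount of trusted-user attention.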
Only people who police spam/abuse [would be able to see the individual ratings of comments and users]
That would make it hard to determine which users I should rate highly. Is the idea that the system would find users who rate similarly to me and recommend them to me, and I would mostly follow those recommendations?
Because I don’t know how else to use the attention of readers who’ve pushed the slider high.
Slashdot shows all the comments in collapsed mode and auto-expands the comments rated higher than the filter setting. We could do that, or add a preference setting that lets the user choose between that behavior and simply hiding comments that reply to something rated lower than their filter setting.
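The two behaviors could be captured in something like the following sketch (names and the `mode` parameter are mine, purely for illustration):

```python
def render_state(comment_score, parent_score, threshold, mode="collapse"):
    """Sketch of the two options above. 'collapse' (Slashdot-style)
    shows every comment collapsed and auto-expands those at or above
    the threshold; 'hide' drops comments whose parent falls below it."""
    if mode == "collapse":
        return "expanded" if comment_score >= threshold else "collapsed"
    # mode == "hide": replies to low-rated comments disappear entirely
    if parent_score is not None and parent_score < threshold:
        return "hidden"
    return "expanded" if comment_score >= threshold else "collapsed"
```
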
You should rate highly people whose judgment you would trust when it differed from yours. We can use machine learning to find people who generate similar ratings to you, if the need arises.
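A minimal version of "find people who generate similar ratings to you" needn't be heavy machinery; cosine similarity over co-rated items is the standard starting point. This is a toy stand-in, with none of the attack resistance a real deployment would need:

```python
import math

def rating_similarity(a, b):
    """Cosine similarity between two users' ratings, computed over
    the items both have rated (dicts mapping item id -> score).
    A toy sketch of the 'rates like you' idea, nothing more."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    na = math.sqrt(sum(a[i] ** 2 for i in shared))
    nb = math.sqrt(sum(b[i] ** 2 for i in shared))
    return dot / (na * nb)

me     = {"c1": 1, "c2": -1, "c3": 1}
alike  = {"c1": 1, "c2": -1}            # agrees on every shared item
unlike = {"c1": -1, "c3": -1}           # disagrees on every shared item
```
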
I thought about the Slashdot thing, but I don’t think it makes the best use of people’s time. I’d like people reading only the innermost circle to be able to basically ignore the existence of the other circles. I don’t even want a prompt that says “7 hidden comments”.
You should rate highly people whose judgment you would trust when it differed from yours. We can use machine learning to find people who generate similar ratings to you, if the need arises.
It would be much harder to decide whose judgment I would trust if I couldn’t see how they rated in the past. I’d have to do it based only on their general reputation and their past posts/comments, but what if some people write good comments yet don’t rate the way I would prefer (for example, they often downvote those who disagree with them)? The system would also essentially ignore ratings from lurkers, which seems wasteful.
If we use ML to find people who generate similar ratings, that seems to create bad incentives. When your user rating is low, you’re incentivized to vote the same way as others so that the ML will recommend you to people; then, once your rating is high, you’d switch to voting based on your own opinions, which might be totally untrustworthy, but people who had already rated you highly wouldn’t be able to tell that they should no longer trust you.
I thought about the Slashdot thing, but I don’t think it makes the best use of people’s time.
Aside from the issue of weird incentives I talked about earlier, I would personally prefer to have the option of viewing highly rated comments independent of parent ratings, since I’ve found those to be valuable to me in other systems (e.g., Slashdot and the current LW). Do you have an argument why that shouldn’t be allowed as a user setting?
It’s hard to be attack resistant and make good use of ratings from lurkers.
The issues you mention with ML are also issues with deciding who to trust based on how they vote, aren’t they?
It’s hard to make a strong argument for “shouldn’t be allowed as a user setting”. There’s an argument for documenting the API so people can write their own clients and do whatever they like. But you have to design the site around the defaults. Because of attention conservation, I think this should be the default, and that people should know that it’s the default when they comment.
The issues you mention with ML are also issues with deciding who to trust based on how they vote, aren’t they?
If everyone can see everyone else’s votes, then when someone who was previously highly rated starts voting in an untrustworthy manner, that would be detectable, and the person can at least be down-rated by others who are paying attention. On the other hand, if we had a pure ML system (without any manual trust delegation), then when someone starts deviating from their previous voting patterns, the ML algorithm can try to detect that and start discounting their votes. The problem I pointed out seems especially bad in a system where people can’t see others’ votes and depend on ML recommendations to pick who to rate highly, because then neither the humans nor the ML can respond to someone changing their pattern of votes after getting a high rating.
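For concreteness, the kind of automated response a pure ML system could make is a simple drift check: compare a user's recent agreement with community consensus against their historical rate, and flag them for re-evaluation when it drops sharply. This is my construction, not part of either proposal, and it illustrates exactly what the hybrid system in question cannot do:

```python
def drift_flag(historical_agreement, recent_votes, consensus,
               drop_threshold=0.3):
    """Toy drift detector: returns (flagged, recent_rate).
    Flags a user whose recent agreement with consensus votes has
    fallen well below their historical agreement rate, instead of
    silently continuing to honor their old high rating."""
    agree = sum(1 for v, c in zip(recent_votes, consensus) if v == c)
    recent_rate = agree / len(recent_votes)
    flagged = historical_agreement - recent_rate > drop_threshold
    return flagged, recent_rate

# A formerly reliable voter (90% historical agreement) who now
# agrees with consensus only 1 time in 4 gets flagged.
flagged, rate = drift_flag(0.9, [1, -1, 1, 1], [1, 1, -1, -1])
```
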
Thanks for the clarifications.