It’s hard to be attack resistant and make good use of ratings from lurkers.
The issues you mention with ML are also issues with deciding who to trust based on how they vote, aren’t they?
It’s hard to make a strong argument for “shouldn’t be allowed as a user setting”. There’s an argument for documenting the API so people can write their own clients and do whatever they like. But you have to design the site around the defaults. Because of attention conservation, I think this should be the default, and that people should know that it’s the default when they comment.
The issues you mention with ML are also issues with deciding who to trust based on how they vote, aren’t they?
If everyone can see everyone else’s votes, then when someone who was previously highly rated starts voting in an untrustworthy manner, that is detectable, and the person can at least be down-rated by others who are paying attention. On the other hand, if we had a pure ML system (without any manual trust delegation), then when someone starts deviating from their previous voting patterns, the ML algorithm can try to detect that and start discounting their votes. The problem I pointed out seems especially bad in a system where people can’t see others’ votes and depend on ML recommendations to pick who to rate highly, because then neither the humans nor the ML can respond to someone changing their pattern of votes after getting a high rating.
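To make the “discounting their votes” idea concrete, here is a minimal sketch in Python of one way such a system might work: track each rater’s long-run agreement with consensus, compare it to a recent window, and shrink their vote weight when the two diverge. Everything here (the class, the window size, the thresholds, what counts as “consensus”) is an invented illustration, not any real site’s algorithm.

```python
from collections import deque


class RaterProfile:
    """Tracks how often a rater's votes end up matching consensus.

    Hypothetical illustration: 'consensus' stands in for whatever
    aggregate signal the system trusts.
    """

    def __init__(self, window: int = 50):
        self.history_agreement = 0.0        # long-run fraction of consensus-matching votes
        self.history_count = 0
        self.recent = deque(maxlen=window)  # 1 if a recent vote matched consensus, else 0

    def record_vote(self, matched_consensus: bool) -> None:
        x = 1 if matched_consensus else 0
        self.recent.append(x)
        self.history_count += 1
        # Incremental mean over the rater's whole history.
        self.history_agreement += (x - self.history_agreement) / self.history_count

    def weight(self) -> float:
        """Vote weight in [0, 1] that drops when recent behavior deviates from history."""
        if self.history_count < 10:
            return 0.5  # neutral weight until there is enough data
        recent_agreement = sum(self.recent) / len(self.recent)
        drift = abs(recent_agreement - self.history_agreement)
        # Tolerate small drift, then discount sharply (both constants are arbitrary).
        return max(0.0, 1.0 - 4.0 * max(0.0, drift - 0.1))


# Example: a rater who agrees with consensus for a while, then defects.
profile = RaterProfile(window=20)
for _ in range(100):
    profile.record_vote(True)
print(profile.weight())  # ~1.0: high trust earned under the old pattern
for _ in range(20):
    profile.record_vote(False)
print(profile.weight())  # ~0.0: discounted once the drift shows up in the window
```

Note that the weakness mirrors the point above: the discount only kicks in after the drift is observable in the vote data, so a rater can bank a high weight under one voting pattern and spend it before the window catches up. And if humans can’t see the votes, this detector is the only line of defense.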