The issue with the buttons is that 4chan has a campaign to mass-downvote anything she does, maybe even using bots to do it automatically. Her texts have disappeared from the main page even though they’re very popular, and every comment she posts picks up downvotes almost immediately. Removing downvotes wouldn’t solve the underlying problem, sure, but it would make the abuse much harder to carry out: to push her texts out of public view, the abusers would have to mass-upvote everything else rather than simply downvoting her specific contributions.
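To make the asymmetry concrete, here is a minimal sketch (all names and numbers hypothetical, not the site’s actual code) of why net-score ranking is brigade-able while upvote-only ranking is not: with score = upvotes − downvotes, a brigade only has to downvote one author’s posts to sink them; with upvote-only ranking, burying a post means inflating every competing post instead.

```python
def rank_net(posts):
    # Net-score ranking: score = upvotes - downvotes.
    # A brigade can sink one author by downvoting only her posts.
    return sorted(posts, key=lambda p: p["up"] - p["down"], reverse=True)

def rank_upvotes_only(posts):
    # Upvote-only ranking: to bury a post, attackers must
    # mass-upvote everything else instead, a far costlier attack.
    return sorted(posts, key=lambda p: p["up"], reverse=True)

posts = [
    {"id": "hers",  "up": 120, "down": 0},
    {"id": "other", "up": 40,  "down": 2},
]

# A brigade dumps 200 downvotes on her post:
posts[0]["down"] += 200

top_net = rank_net(posts)[0]["id"]           # "other" -- her post is buried
top_up = rank_upvotes_only(posts)[0]["id"]   # "hers"  -- still on top
```

The same brigade effort that flips the net-score ranking does nothing under upvote-only ranking, which is exactly the point about removing the downvote button.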
Malicious people will take advantage of whatever mechanism is offered.
If you allow comments but not downvotes, they will use comment spam. If you allow downvotes from new accounts, they’ll do the bury-brigade thing. If you require posters to get accounts and use their legal names and real locations, attackers will stalk people, find their homes, scare their children, and leave poo on their doorstep. If you have a mechanism for automatic takedowns of copyright violations, they’ll send forged DMCA notices. (Yes, even though it’s illegal to do so. It’s a common tactic to get the identity of pseudonymous posters, since a DMCA counterclaim requires a statement of the poster’s name, address, and telephone number!)
Attackers of this sort look for asymmetric attacks — things that are relatively cheap, easy, and risk-free for them to do, and cause much more ① grief, and ② time & energy expenditure, on the part of the attacked person or site. The ideal attack is one that is quick and easily repeatable, causes the target great discomfort, and requires the target to spend a bunch of time to clean it up. The intention is to get the target to go away, to cease being visible; to “run them out of town” as it were.
(For an analogy, consider the act of a vandal spray-painting swastikas or dicks on someone’s house. It makes the target feel very unsafe; it causes them a bunch more work to clean up than it cost the vandal to do it; it can be done quickly; and it’s not very risky in a lot of neighborhoods.)
Attackers look for relative advantages — for instance, if the attackers have coding ability and the target does not, they can use automated posting (bots) or denial-of-service attacks. If the attackers have more free time (e.g. if they are unemployed youths and the target is a working mother’s blog), they can post obscene spam or what-have-you at hours when the target is not online to moderate or respond. They also look for ways to increase their advantage — for instance, if they can ascertain the target’s real-world identity while remaining anonymous themselves, the attackers can escalate to more credible threats, harassment with photos, “we know where you live”, or the like.
Responses to this sort of attacker have to address the facts on the ground. They have to make it harder for attackers to drive up legitimate users’ costs (in time, labor, and emotion) without imposing much additional encumbrance on those same legitimate users.
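One common way to shift the cost back onto the attacker (my own illustration, not something the text prescribes) is to weight votes by account age, so that freshly created sockpuppets count for nothing and a brigade has to maintain a stable of aged accounts. A minimal sketch, with the age thresholds as assumed policy knobs:

```python
import time

ACCOUNT_MIN_AGE = 7 * 24 * 3600    # hypothetical: votes need a week-old account
FULL_WEIGHT_AGE = 365 * 24 * 3600  # hypothetical: full weight after a year

def vote_weight(account, now=None):
    """Weight a vote by account age so throwaway accounts
    created for a brigade contribute nothing (assumed policy)."""
    now = now if now is not None else time.time()
    age = now - account["created_at"]
    if age < ACCOUNT_MIN_AGE:
        return 0.0  # brand-new accounts cannot move scores at all
    # Weight ramps linearly toward 1.0 over a year of account age.
    return min(1.0, age / FULL_WEIGHT_AGE)

now = time.time()
throwaway = {"created_at": now - 3600}                  # one hour old
veteran = {"created_at": now - 2 * FULL_WEIGHT_AGE}     # two years old

w_new = vote_weight(throwaway, now)  # 0.0
w_old = vote_weight(veteran, now)    # 1.0
```

The attack is no longer quick and cheap: every effective vote now costs the attacker an account that had to be registered and left alone for months, while a legitimate long-time user notices nothing.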