I have some suggestions for mechanistic improvements to the LW website that may help alleviate some of the issues presented here.
RE: Comment threads with wild swings in upvotes/downvotes due to participation from a few users with large vote-weights; a capping/scaling factor on either total comment karma or individual vote-weights could solve this issue. An example total-karma-capping mechanism would be limiting the absolute value of a comment’s displayed karma to twice its parent’s karma. An example vote-weight-capping mechanism would be limiting a vote’s weight to the number of votes already on the comment. The total-cap mechanism seems easier to implement if LW records only a running karma total for each comment rather than maintaining the set of all votes on it. Any mechanism like these has issues, though, including the possibility that a user votes on something but sees no change in the displayed karma at all.
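To make the two caps concrete, here is a minimal sketch in TypeScript. All names are hypothetical, and it assumes karma is stored as a single raw total per comment, per the note above:

```typescript
// Minimal sketch of the two capping rules described above.
// Names are illustrative, not LessWrong's actual code.

/** Cap displayed karma so its absolute value never exceeds twice the parent's karma. */
function cappedDisplayKarma(rawKarma: number, parentKarma: number): number {
  const cap = 2 * Math.abs(parentKarma);
  return Math.max(-cap, Math.min(cap, rawKarma));
}

/** Cap an individual vote's weight at the number of votes already on the comment. */
function cappedVoteWeight(voteWeight: number, existingVoteCount: number): number {
  // Let the first vote count for at least 1 so comments can get off the ground.
  return Math.min(voteWeight, Math.max(1, existingVoteCount));
}
```

Note that `cappedDisplayKarma` exhibits exactly the failure mode flagged above: once the raw total is past the cap, further votes leave the displayed number unchanged.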
RE: Post authors (and commenters) not having enough information about the behavior of specific commenters when deciding whether/how to engage with them, and the cruelty of automatically attaching preemptive dismissals to comments; it does not seem more cruel to publicly tag a user’s comments with a warning box saying “critiques from this user are usually not substantive/relevant” than to ban them outright. This turns hard censorship into soft censorship, which seems less dangerous to me, and also like something moderators could apply more easily, without requiring hundreds of hours of deliberation.
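For concreteness, one hypothetical shape for such a flag (invented names, not an actual LW schema):

```typescript
// A public per-user warning flag: the comment stays visible, with the notice
// attached above it. Purely illustrative.
interface UserWarningFlag {
  userId: string;
  notice: string; // e.g. "Critiques from this user are usually not substantive/relevant"
  setByModeratorId: string;
  setAt: Date;
}

/** Return the notice to render above a comment, if its author is flagged. */
function noticeFor(flags: Map<string, UserWarningFlag>, authorId: string): string | undefined {
  return flags.get(authorId)?.notice;
}
```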
RE: Going after only the most legible offender(s) rather than the worst one(s); give users and moderators the ability to mark a commenter’s interactions throughout a thread as “overall unproductive/irrelevant/corrosive/bad-faith”, in a way that lets users track who they’ve had bad interactions with in the past, and gives moderators better visibility into who is behaving badly even when they have not personally seen the bad behavior (with the built-in bonus of marking examples). These marks should be visible only to the user assigning them and to moderators, for what I think are obvious reasons. A more general version of this system would be the ability to assign tags to users for a specific comment/chain (e.g. “knowledgeable about African history”, “bad-faith arguer”) that link back to the comment which inspired the tag. Such a system would be useful for users who have a hard time remembering usernames, but could also, unfortunately, result in ignoring good arguments from someone after a single bad interaction.
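A hypothetical data model for these private marks/tags might look like the following (field names invented for illustration):

```typescript
// Private per-user tag, linking back to the comment that inspired it.
// Visible only to its author and to moderators.
interface PrivateUserTag {
  taggerId: string;        // the user who assigned the tag
  taggedUserId: string;    // the commenter being tagged
  label: string;           // e.g. "knowledgeable about African history"
  sourceCommentId: string; // the comment that inspired the tag
  createdAt: Date;
}

/** Visibility rule: only the tag's author and moderators may see it. */
function canSeeTag(tag: PrivateUserTag, viewerId: string, viewerIsModerator: boolean): boolean {
  return viewerIsModerator || viewerId === tag.taggerId;
}
```

Storing `sourceCommentId` is what provides the “built-in bonus of marking examples” mentioned above.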
Meta: I am new and do not know if this is an appropriate place for site-mechanic suggestions, or where to find prior art. Is there a dedicated place for this?
This is a good place! There isn’t a super central repository for this. You can take a look at the Site Meta and LW Moderation tags to find other posts in the same reference class.
This is bad. The point of voting is to give an easy way of aggregating information about the quality and reception of content. When voting ends up dominated by a small interest group[10] without broader site buy-in, and with no one being able to tell that this is what’s going on, it fails at that goal. And in this case, it’s distorting people’s perception of the site consensus in particularly high-stakes contexts, where authors are trying to assess what people on the site think about their content and about the norms of posting on LessWrong.
I’d like you to consider removing votes entirely, subsuming them into reacts. Reacts allow more nuance and, importantly, are not anonymous. I believe this is more similar to how humans in the ancestral environment would have thought about and judged community contributions, in ways that are conducive to good epistemics and incentives. (There are also failure modes that would be important to think about, such as a ‘seal of approval’ dynamic.)
Aggregating this well for the purposes of sorting and raising to attention would be tricky, but seems plausibly doable and worth it to me.
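As a sketch of what that aggregation might look like, one could map each react to a weight and sort by the weighted sum. The react names and weights below are invented, and choosing them well is exactly the tricky part:

```typescript
// Illustrative react weights; any real table would need careful tuning.
const REACT_WEIGHTS: Record<string, number> = {
  insightful: 3,
  changedMyMind: 4,
  agree: 1,
  disagree: -1,
  unclear: -2,
};

/** Collapse a comment's react counts into a single score for sorting. */
function sortScore(reactCounts: Record<string, number>): number {
  return Object.entries(reactCounts).reduce(
    (score, [react, count]) => score + (REACT_WEIGHTS[react] ?? 0) * count,
    0,
  );
}
```

Any fixed weight table bakes editorial judgment into the sort order, which is part of why this seems tricky.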
However, I expect you have already thought about this a lot more than I have and have apparently decided against it, so I am also curious to hear why not.
Many people would be much less inclined to vote if voting were fully public, so you would lose a lot of signal.
Would the ratio of signal to (noise + adversarial signal) improve?
Edit: thinking about it more, I’m unsure; it seems plausible the answer is no. (The react was added before this edit.)