Meta-note: Right now, as I check the top comments for today, all of them are replies to heavily downvoted comments. This is the behavior the downvoted-thread-killer was meant to prevent, but we don’t yet have the “troll-toll all descendants” feature. Noting this because multiple people asked for examples and for how often something like this happened.
The eridu-generated threads show that the direct reply toll doesn’t seem to work, or at least it didn’t in this case. I still don’t like the idea of the indiscriminate whole-thread toll, but I’m no longer expecting the current alternative to be effective.
I’ve thought of another option: maybe prohibit a user from posting anywhere in a subthread under any significantly-downvoted comment of their own? This is another feature of all bad threads that could be used to recognize them automatically: the user in a failure mode keeps coming back to the same thread, so prohibiting this single user from doing so seems sufficient.
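A minimal sketch of how such a prohibition could be checked at reply time. Nothing here is an existing LW mechanism; the class, the function, and the −3 cutoff are all assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

DOWNVOTE_THRESHOLD = -3  # hypothetical cutoff for "significantly downvoted"

@dataclass
class Comment:
    author: str
    score: int
    parent: Optional["Comment"] = None

def may_reply(user: str, target: Comment) -> bool:
    """Forbid replying anywhere in a subthread that descends from a
    significantly-downvoted comment written by this same user."""
    node = target
    while node is not None:
        if node.author == user and node.score <= DOWNVOTE_THRESHOLD:
            return False
        node = node.parent
    return True

# A user's -5 comment with a subthread growing beneath it:
bad = Comment("eridu", -5)
reply = Comment("bystander", 2, parent=bad)

assert not may_reply("eridu", reply)   # the author is locked out of their own bad subthread
assert may_reply("bystander", reply)   # everyone else is unaffected
```

The point of the ancestor walk is that only the one user in the failure mode is blocked, matching the observation that bad threads are driven by a single returning participant.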
It looks like that idea has already been replaced with hiding subthreads rooted on comments that are −3 or lower from recent and top comments.
I like the idea of hiding bad subthreads, but wish it were a manual moderator action instead of one based on votes. A lot of discussions that descend from downvoted comments are perfectly fine and don’t need to be hidden.
I don’t think that’s a good idea. What if it’s a non-troll user who just made a bad comment? They wouldn’t be able to come back and admit their mistake or clarify their argument. An actual troll, on the other hand, could just make a new account and keep going in that thread.
A trivial low-cost solution, roundly ignored by EY and the rest of the forum management.
A related quote:
“Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.” -- Howard Aiken
If you want to try harder at this “ramming”, you could follow the link I posted above and present your idea there as a comment. :)
Done.
I endorse this, incidentally. (Not that there’s any particular reason for anyone to care, but I’ve expressed my opposition to various other suggestions, so it seems only fair to express my endorsement as well.)
I also share the belief that automatic actions are more likely to apply in situations their coders would not endorse. That said, I also endorse the desire to reduce the workload on administrators. (And I appreciate the desire to diffuse social pressure on those administrators to avoid or reverse the action, though I’m more conflicted about whether I endorse that.)
I just noticed that cousin_it suggested this last year. Also, Eliezer asked:

Does anyone have any strong reasons why LW is better off six months from now if there’s a preference option instead of just an automatic behavior to hide such comments? If not, I would just like to see the behavior.

If anyone can think of a strong reason, they should probably follow the link above and comment there.
Thanks for the link. I don’t expect that filtering of what’s presented is a good strategy, as it aims at shaping the perception of the community culture, not at shaping the culture itself. It’s more important to shape the culture, and perception can’t be automatically filtered in a way that presents a picture that’s significantly different from the unfiltered picture (for some sense of “significantly”).
I think the idea is that if people don’t see new replies to the hidden subthread in recent comments, they’ll be much less likely to respond to those replies, so such threads will die out much more quickly. This will also cause trolls to not have as much fun trolling here so they’ll be more likely to leave us alone in the future.
ETA: On the other hand, perhaps we should talk about non-technical ways to change the culture as well. Do you have any ideas?
ETA2: A lot of previous discussion can be found here.
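The hiding mechanism described above, dropping from the recent-comments feed anything that sits under a comment at −3 or below, could be sketched like this (the data structures and threshold are assumptions, not LW’s actual code):

```python
from dataclasses import dataclass
from typing import Optional

HIDE_THRESHOLD = -3  # hypothetical: subthreads rooted here or lower are hidden

@dataclass
class Comment:
    score: int
    parent: Optional["Comment"] = None

def visible_in_recent(comment: Comment) -> bool:
    """Hide a comment from Recent Comments if it, or any ancestor,
    sits at the hide threshold or below."""
    node = comment
    while node is not None:
        if node.score <= HIDE_THRESHOLD:
            return False
        node = node.parent
    return True

troll_root = Comment(score=-7)
buried_reply = Comment(score=4, parent=troll_root)  # upvoted, but under a -7 root
normal = Comment(score=1)

recent = [buried_reply, normal]
assert [visible_in_recent(c) for c in recent] == [False, True]
```

Note that the upvoted reply is hidden too; that is the intended effect (the whole subthread dies out), and also exactly the cost the objection above is pointing at.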
I’d prefer the subthread to be outright locked than this. (I only very mildly oppose the latter but the former would be abhorrent.)
I’ll observe that this will also prevent the “Huh. Can someone explain why this comment has been so heavily downvoted?” sorts of comments, as well as the “Oh. I now see what was wrong with my comment, thanks all” sorts of comments.
Or, rather, it will prevent those comments from appearing where they would naturally go in a thread. Of course this won’t necessarily prevent people from making the same comments they’re making now, it will just prevent them from doing so in that location.
These might or might not be good things.
More generally, I’m interested in what results you expect from implementing such an option. It would be good to record that somewhere before making a change, so we can subsequently establish whether the change had the desired results.
I’m also curious in what ways you expect those results to compare to giving mods the power to freeze a comment tree (that is, identify a comment and not allow further comments to be made downstream of it by anyone) when they consider it appropriate. But that’s more of a personal curiosity.
I thought of that, but there doesn’t appear to be a way of automatically separating these cases. Such questions could be edited-in in the downvoted comment itself, or included in a separately posted improved reframing of the content of the downvoted comment.
This would make bad threads of the currently typical form literally impossible to construct, so it’s at least an interesting experiment. The successful outcome is for the downvoted conversations to peter out faster due to the inconvenience of having to find new starting points that are not replies to preceding conversations. I expect the worst that could happen is that instead of the nice orderly Big Bad Threads we’ll have a deluge of bad comments scattered all over the place.
This variant, which blocks only the downvoted user’s comments, seems better on most counts: it avoids the indiscriminate blocking that motivated the need for human judgment; being automatic, it won’t focus complaints as much; it seems to catch all the same threads a human moderator might close; and it applies faster.
OK, thanks.
I suspect that if the goal is to make bad threads peter out faster, preventing all users from contributing to a bad thread will likely achieve that goal more readily than preventing one user from doing so.
We could even do that automatically if we wanted. For my own part I trust humans more than simple automatic pattern-matchers for this sort of thing, but if y’all prefer automatic pattern-matchers to diffuse the resulting complaints that’s an option as well.
Of course, if we’re OK with automatically blocking the downvoted user on the thread but not OK with automatically blocking other users on the thread, then an automatic branch-freeze won’t work. This might be true if there are other as-yet-unstated goals being addressed, beyond the desire to end the thread itself.
Personally, I don’t like the idea of letting everyone post on a thread except the person they are responding to; one-sided conversations make my teeth itch.
I was one of those who asked for examples. This is indeed a good example, and I take it to heart. I am still uncertain what the effect of the new and planned rules will be (troll feeding fee etc.). But it’s now less a case of “what problem are you trying to solve?” and more “how should we solve this problem?”
In more detail: I missed this thread, but skimming the remaining comments, I think it would have been a waste of time to participate. But since many others did participate (while saying in many comments that eridu was quite irrational and/or wrong), it’s possible I would have been drawn in if I had the opportunity. So I’m glad you stopped it.
At the time of writing this reply, DanArmak’s comment had been downvoted (I voted it back up). Downvoting a comment like that one is the sort of reason why I am starting to distrust the behavior of meta-threads as a reliable signal of what the community thinks.
It’s easy to see: take

“But since many others did participate (while saying in many comments that eridu was quite irrational and/or wrong), it’s possible I would have been drawn in if I had the opportunity. So I’m glad you stopped it.”

… and read “It’s obvious that eridu is stupid and irrational, and people said so yet kept blabbering and that could have made me join in, so thanks for stopping all this idiocy.”
It actually tempted me to downvote too, but the comment is overall useful and that is a very uncharitable interpretation of the wording. It’s simply not true that it was a waste of time for everyone—each of my comments and each response to them made me learn something and helped me do a few updates.
It was also a very good opportunity for me to review my own cached database on gender-unfairness in this particular case, which I hadn’t done since well before learning all the cool stuff about rationality I’ve learned on LessWrong. Overall, I came out winning from that thread, regardless of whether it was started by a troll or not (the alternative was being bored and mind-killed by my dull day job). So, for me, and maybe a few others, the above statement about eridu and the thread rings untrue, though not completely unjustified in retrospect.
I haven’t seen eridu’s comments myself. I can make no real judgement on their quality. My comment was based solely on the comments of other people in the thread. And the gist of most of those comments is that eridu was being irrational and wrong.
However, now that you point it out, it seems wrong for me to wish to restrict other people’s conversations. I would prefer to simply ignore such conversations, but I don’t trust myself to do so reliably. Selfishly, I might wish for moderators to ban such conversations, but the moderators’ preferences on what to ban don’t always coincide with mine or other users’.
A better technical solution might help. I don’t have enough experience with other forums to make good predictions about what different features might lead to.
Do you mean in general, or do you mean in a particular forum?
If the latter: there are all kinds of conversations I wish to restrict on this particular forum. Most of them don’t in fact happen here, but if they started to I would leave. Some of them do happen here, and I grit my teeth and do my best to ignore them, and I downvote them to communicate my preference.
What’s wrong with that?
I mean conversations on LW, yes. And yes, there are conversations, few in practice, that I wouldn’t wish to happen even if I were oblivious to them. Like anything that harms people.
But the subject I was discussing was conversations that bothered me when I saw them, not just in themselves (then I might vote or reply to influence them), but by tempting me to participate in something I would later regret as a waste of time: e.g., an unproductive argument, troll-baiting, bad argumentation or rationality, and other things of that sort. Hence Eliezer’s new rules, which are intended to shut down downvoted conversations more quickly. Although I disagree with the method, I tentatively agree with the goal.
However, I don’t want to stop others from having conversations that I don’t like merely because they e.g. use poor arguments or defend completely wrong positions. It would be best for the conversations to happen, just without bothering me. I don’t know if this can be achieved in practice.
Of course I can’t be sure that the conversations that affect you that way are the same ones that affect me that way. So could you say which ones you mean?
Why would that be better than the conversations not happening here at all?
I would prefer not to point to specific threads. Generally speaking, what most irritates me is exchanges where we talk past each other in long comments without ever quite engaging with each others’ main points, and threads where we don’t really engage one another at all but rather all try to show off how individually clever we are.
Because it would be better for others to have the conversations they want, and the same for me, if only I were not bothered by them.
Well, OK, but… let me back up a bit here, because I’m now confused.
You’ve said that you’re talking about conversations that bother you by tempting you to participate in them, and you’ve (tentatively) endorsed the goal of shutting those conversations down. But you’ve also said you endorse allowing conversations to continue if people want those conversations. And it seems implicit in the whole conversation that you’re treating people’s participation in conversations as evidence that they want those conversations.
It seems that those three sentences describe an internally inconsistent set of desires… that is, if they were true of me, there would exist conversations C such that I both want C shut down and do not want C shut down.
Which, OK, that sort of goal-conflict is certainly a thing that happens to human brains, it happens to me all the time, and if that’s what’s going on then I understand my confusion about it and no further clarification is necessary. (Or, well, more accurate is to say I consider no further clarification likely.)
But if that’s not what’s going on then I’m confused.
First, at least some of the other people in these conversations say that, unlike me, they really want to participate in them, and it’s not a temptation they would want to avoid.
Second, I would prefer those conversations to exist (since others want them) if they could exist in a way that would not tempt me to join in, as they do now. As I said, I don’t know whether that goal can be achieved in practice (except by moving these conversations to a completely separate site, obviously).
As long as that goal is not achieved at least partially, I recognize there’s a problem (for myself) with having these conversations here on LW. And I tentatively welcome changes to the LW rules that try to fix this, even though I am uncertain whether the specific changes being implemented might have other, worse effects.
Yes there are conflicting goals here, but I am explicitly balancing them.
Let’s say that I post comment B in response to comment A. Comment A has 0 karma, so I suffer no karma penalty. Five minutes afterward, however, various other users downvote comment A to −5. Would I be karma-taxed retroactively? How would this affect comment B’s rating? If the answers are “no” and “it wouldn’t”, that could explain the present situation.
I wonder if there can be a race condition, when a comment is started before its parent is downvoted to −3, but submitted after, resulting in an unexpected karma burn.
Yes. That happened to me yesterday; not only does it produce karma loss, but the warning message doesn’t pop up.
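One hypothetical way to close that race is to make the penalty decision from a fresh read of the parent’s score at submission time, and to warn rather than charge silently. A sketch, with all names and thresholds assumed:

```python
PENALTY_THRESHOLD = -3  # hypothetical score at which the karma toll applies

def submit_reply(parent_score_at_submit: int, confirmed: bool) -> str:
    """Decide using the parent's score *now*, not the score shown when the
    reply box was opened, so a parent downvoted mid-composition still
    triggers the warning instead of a silent karma burn."""
    if parent_score_at_submit <= PENALTY_THRESHOLD:
        return "post_with_penalty" if confirmed else "warn"
    return "post"

# The race: the page showed 0 karma when the reply was started,
# but the parent has been voted down to -5 by submission time.
assert submit_reply(-5, confirmed=False) == "warn"
assert submit_reply(-5, confirmed=True) == "post_with_penalty"
assert submit_reply(0, confirmed=False) == "post"
```

This makes the manual open-another-window workaround unnecessary, since the server performs the same fresh check itself.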
I guess a workaround would be to open the parent in another window and check its vote before hitting “comment”… And if it is already at −2, maybe think a bit first :)
I hope that this half-assed mis-implementation gets fixed eventually. Incidentally, my earlier suggestion to only apply karma burn when the offending comment’s author has negative monthly karma would largely take care of the race condition as well, if the warning message pops up based on the monthly karma. Something along the lines of “do you really think it’s a good idea to reply to someone with negative karma?”
Yeah, that sounds like a much better solution than what we’ve got. Your workaround should also work, and would be made a bit safer by applying the reversible vote trick, though that’s a borderline exploit. But I wouldn’t be surprised to find other issues; the different parts of the karma system here don’t always synchronize perfectly.
A related note: You can sometimes get around the karma burn by upvoting a comment that’s at −3, commenting, and then reversing your upvote after.
“No” and “It wouldn’t”, indeed. But heritable penalties once something does go to −3 would prevent users with zero or lower karma from replying further, thus preventing the current thread from happening again.
I don’t think “preventing the current thread from happening again” is anywhere near an important enough goal to justify heritable karma penalties—let alone retroactive ones.
I’ve not seen retroactive penalties proposed anywhere; the current system warns you, when you start a comment, if a penalty applies, and presumably that wouldn’t change.
Yep. Nobody was proposing retroactive.
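A sketch of the heritable, non-retroactive rule as described above: the check runs only at posting time, and blocks users at zero or lower karma from replying anywhere under a sufficiently downvoted ancestor. The function name and the −3 threshold are assumptions:

```python
TOLL_THRESHOLD = -3  # hypothetical score at which the heritable toll kicks in

def reply_allowed(user_karma: int, ancestor_scores: list) -> bool:
    """Heritable toll, checked only when the reply is posted (never
    retroactively): if any ancestor is at the threshold or below,
    users with zero or lower karma cannot post in the subthread."""
    if any(score <= TOLL_THRESHOLD for score in ancestor_scores):
        return user_karma > 0
    return True

assert not reply_allowed(0, [-4, 2, 1])   # broke user, toxic subthread: blocked
assert reply_allowed(50, [-4, 2, 1])      # established user can still reply
assert reply_allowed(0, [2, 1])           # clean subthread: no restriction
```

Because the check only looks at ancestor scores as they stand at posting time, a parent that later drops to −3 never reaches back and taxes an already-posted reply.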
An alternative possibility, which may have the same or a similar effect, is to auto-close the children of heavily downvoted posts when they appear in the “Recent Comments” window. Adding an extra step to reply to such a post will tend to reduce the number of replies it gets, and will clearly signal to the reader that the post is, in fact, the child of a heavily downvoted post.
I have no idea if this possibility will be better or worse than the heritable penalties (nor, for that matter, which option would be easier to implement).
Could we change the “Recent Comments” box to say “Recent Threads” instead, with a count of updated comments, net karma, and most recent poster for each thread as usual? For example, something like this:
EliezerYudkowsky on Meta-note: Right now… by EliezerYudkowsky on The Worst Argument In The World | 7k, 2 new
Mugbuster on You all smell… by Obvious_Troll on The Worst Argument In The World | −15k, 18 new
This tells me that Eliezer commented on a thread that he started, and the thread is generally positively rated, though low-volume, so I might click it. On the other hand, Mugbuster commented on a high-volume thread that has cumulative −15 karma, which means that it’s probably a trolling thread, and I should stay out of it.
That one’s in progress, I think.
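The proposed “Recent Threads” summary could be computed by folding a flat, chronologically ordered comment stream into one row per thread. A sketch with made-up data; the record format and field names are assumptions:

```python
def recent_threads(comments):
    """Collapse a chronological (thread, author, karma, is_new) stream
    into one summary per thread: most recent poster, net karma, and a
    count of new comments."""
    threads = {}
    for thread_id, author, karma, is_new in comments:
        t = threads.setdefault(thread_id, {"poster": author, "net": 0, "new": 0})
        t["poster"] = author      # stream is chronological, so last poster wins
        t["net"] += karma
        t["new"] += is_new        # bool counts as 0 or 1
    return threads

stream = [
    ("The Worst Argument In The World", "someone", 5, False),
    ("The Worst Argument In The World", "EliezerYudkowsky", 2, True),
    ("troll subthread", "Obvious_Troll", -9, True),
    ("troll subthread", "Mugbuster", -6, True),
]
summary = recent_threads(stream)
assert summary["troll subthread"] == {"poster": "Mugbuster", "net": -15, "new": 2}
```

A −15 net score with many new comments is exactly the “probably a trolling thread, stay out” signal the proposal describes, visible before anyone clicks through.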
Also, replying to a comment elsewhere in the thread: obviously, penalties are not going to be charged retrospectively if an ancestor later goes to −3. Nobody has proposed this. Navigating the LW rules is not intended to require precognition.
Well, it was required when (negative) karma for Main articles increased tenfold.
Yes, or when downvotes were limited without warning.