This opens up a new aspect of downvoting, which I’ve just now tried out and will describe in the interest of full disclosure: you can “swim up” the chain of comment parents until you find one that is at −3, and by downvoting that, cause the entire downthread discussion to be effectively censored.
Swimming upthread is something I do quite often in the course of trying to understand what sparked a particular controversy; I’m often dismayed to find that these controversies are tangents that had nothing to do with the original question being investigated, and not a whole lot to do with rationality.
This comment by Wei Dai, showing up at the top of Recent Comments, was the trigger for my trying out this tactic (it felt like it belonged in a low-overall-value discussion of the kind I’d like to see less of).
No fewer than eight levels above was this comment by wedrifid, sitting at −3, with a total of 38 child comments. Downvoting it (without the slightest qualm, given that its first non-quoted words were a rhetorical “How dare you”, which I strongly prefer not to see around here) did in fact cause Wei Dai’s comment to disappear from Recent Comments. (Here’s the starting point of the whole subthread.)
So, that’s one (possibly unexpected) consequence of the new rule. Good? Bad? I haven’t formed an opinion yet.
(Some disclaimers: I have no particular antipathy toward either Wei Dai or wedrifid, nor did I allow myself to develop a particular attachment to either “side” in that particular controversy, given that the appearance of “sides” at all didn’t strike me as particularly productive. I’m aware that my commenting on this may negate the censorship consequences on this particular discussion, but it seemed to me that bringing this out in the open had greater expected value than just quietly censoring one subthread and retaining the power to do it again on other occasions.)
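For concreteness, the hiding rule under discussion can be modeled as a short sketch (hypothetical code; the −4 threshold and all names here are assumptions drawn from this thread, not the actual implementation):

```python
# Toy model of the moderation rule discussed here: a comment is hidden
# from Recent Comments when ANY ancestor sits at -4 karma or below.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    score: int
    parent: Optional["Comment"] = None

def is_hidden(c: Comment) -> bool:
    """True if any ancestor (not the comment itself) is at -4 or lower."""
    node = c.parent
    while node is not None:
        if node.score <= -4:
            return True
        node = node.parent
    return False

# "Swimming up": a deep comment is visible while its ancestor sits at -3...
root = Comment(score=8)
ancestor = Comment(score=-3, parent=root)
deep = Comment(score=5, parent=ancestor)
assert not is_hidden(deep)

# ...but one extra downvote on the ancestor hides the whole subthread.
ancestor.score -= 1
assert is_hidden(deep)
```

A single extra downvote on a −3 ancestor flips every descendant to hidden, which is exactly the “swim up” tactic described above.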
I have no particular antipathy toward either Wei Dai or wedrifid, nor did I allow myself to develop a particular attachment to either “side” in that particular controversy, given that the appearance of “sides” at all didn’t strike me as particularly productive.
Not productive in the slightest. In fact I would happily downvote my own comment (despite reflectively endorsing it) just to hide the entire pointless load of tripe.
Yep, there’s some of my own comments I wish I could downvote for the same reason.
Really? That’s a little surprising, though only in a purely logistical sense: you don’t tend to be in situations where downvoting yourself could be effective. Voting on your comments is more extreme than on most, so whenever your comments form part of an unproductive conversation they already tend to be downvoted well below the threshold, where less prominent users who draw less attention may only have reached −2 or −3. For this reason I suspect the current implementation handles this for you without requiring your noble self-sacrifice.
(Pardon me if I’m just being too literal and you meant “would wish to be able to downvote”. The prominence and popularization factor is just what popped into my head following the “that would be redundant” thought.)
Me too. And that was even a feature of the system, once upon a time. But I’m not bitter, no.
This is likely the point of the rule: to discourage otherwise-high-quality comments that might inspire a wave of crappy ones.
The problem seemed to be that a crappy comment can sometimes inspire a wave of good comments.
Yes, and I like it (a lot). Especially now that the comments are hidden. When the comments were still visible it was more necessary to reply (so that errors aren’t accepted without correction). Now the (presumably, more often than not) bad replies don’t require high-quality refutation because they are invisible to those who don’t seek them out. The penalty to comment replies has very little downside.
I generally don’t read deeply nested comments (except when I load the Recent Comments page, which shows me everything without knowing how deep it is). I find they’re rarely worth it, especially when it’s just two people going hammer and tongs at each other. Even if one of the two people is me.
On reflection, I think I got a bit frustrated towards the end of my discussion with wedrifid and lost some of my “cool”, but overall I would say that the discussion has been productive, at least for me, given the inherent difficulties of human communication (and the still-mysterious-to-me refusal on wedrifid’s part to answer many of my questions). While the information I got wasn’t what I set out to obtain at the start, what I got is nevertheless useful. For example, I’ve learned that there are a number of forum behaviors that he considers undesirable and is willing to “punish” (which he apparently means in a somewhat technical sense):
rhetorical questions aimed at convincing the audience (and not hedging/indicating uncertainty)
inferring (“mind-reading”) negative motives or toxic beliefs in others and then stating them publicly in order to shame
quoting others out of context in order to make them look bad (this one was actually learned previously, but I’m including it here for completeness’ sake)
To be clear, I naturally don’t disagree that these behaviors are bad, but I think wedrifid tends to err in the direction of judging too many people guilty. Regardless, at least in the future I can be more careful about my uses of rhetorical questions, inference of motives and beliefs, and quoting (e.g., not using them unless I’m extremely confident that their actual and intended effects won’t be misunderstood), and hope to avoid some of the “punishments” that way.
It may be that in retrospect the amount of useful information exchanged seems really small compared to the amount of text exchanged. I think that’s due in part to hindsight bias and the illusion of transparency, which make us think communication is easier than it really is, but almost certainly there are also things we could have done better that would have made the exchange go more smoothly and efficiently. If anyone has any suggestions in that regard, I think (at least speaking for myself) they would be very much welcomed.
I wrote a few here, then stored them away: I want to hold off on proposing solutions. Let’s discuss the problem instead.
What started the whole thing was a question asked by komponisto, presumably intended to get at some interesting aspect of the object-level discussion, but which rapidly went meta (not “meta” in the sense of discussing LW, but “meta” in the sense of discussing the discussion).
Going meta isn’t the problem, per se. Losing track of the object-level inquiry altogether, while the meta discussion explodes into a 167-comment beast from a one-word comment? Yes, I think that qualifies.
The original comment which led to the explosion is upvoted at +8. (That’s one way the “technical” fix of censoring descendants of highly downvoted comments might be missing its target, not so much low-quality comments as polarizing, i.e. trollish, comments.)
The thread rapidly hits the limit of reply nesting (10 levels), so that only a portion of it can be seen simultaneously with the original exchange (komponisto’s question and nsheppard’s one-word reply). Your replies, for instance, appear only on page 2. It’s a safe assumption that readers coming across your replies have lost the original context, unless they were involved in the controversy from the start.
On this first page, several of wedrifid’s comments—and only wedrifid’s—are highly downvoted. This further reinforces the hypothesis that the thread is polarizing and information cascades are taking place.
Reading your first intervention requires loading page 2 of the thread, and reading through to the bitter end requires one more page. This is way beyond what adds value to most LW readers except the most dedicated, and reminds me of the admonitions against thread mode.
Starting from your first intervention, the pattern becomes mostly a “ping-pong” one of you and wedrifid going back and forth. Only one other commenter is active on page 2 of the thread (TheOtherDave). A few others pipe up on page 3, but I suspect that by that point these are people being dragged into the conversation (from Recent Comments) because it has started to resemble a flamefest.
Between page 2 and 3, the discussion has drifted from “meta” in the sense of discussion-on-discussion, to “meta” in the sense of discussion-about-who-downvotes-what, i.e. into slime-dripping cancer territory.
Yes, Eliezer’s “cancer” pronouncement is downvoted and ironically buried in a thread that has several ancestor comments which are Eliezer’s and highly downvoted. It nevertheless captures a key truth: extended discussions of the game-theoretical aspects of the filtering features of LW do not have much potential to generate useful inferences from true beliefs. (Or stated more succinctly: most meta-discussion is neither epistemically nor instrumentally rational.)
I do think there is value in “meta” in the sense of discussion-about-discussion, however, and in particular in discussion of community norms, and I agree with your assessment of your own contributions.
That’s about as much as I can say without starting to make recommendations.
the thread is polarizing and information cascades are taking place.
Checking my understanding of this telegraphic little clause:
polarizing: those who invest the effort in following the argument will tend to pick a side they like best and vote accordingly?
information cascade: without realizing it, or knowingly forgoing their own deep evaluation, people affiliate themselves with the winning side, piling on extra, uninformative votes?
Yes on both counts.
Thanks. I don’t have much to add and look forward to seeing your suggestions.
This may be a stupid question, but… why do you want to avoid “punishment” (in the technical sense you reference here)?
My tentative understanding is that “labeling something and calling it undesirable” is only one form of “punishment” that fits wedrifid’s definition, and that if I ignore his milder punishments, he may escalate to more severe forms. (I started putting an example of what I think may be one of his more severe forms of punishment, but removed it in case he considers it to be either quoting out of context or mind-reading.)
My expectation is that in most cases when I’m punished I will consider myself innocent but also have some doubt (e.g., perhaps I am biased in my self-assessment or just missing something obvious). I may be tempted to defend myself or ask wedrifid to explain his reasons, which may cause more discussion that others consider unproductive, as well as frustration to myself if I fail to resolve the doubt.
OK. Thanks for the explanation.
This could be fixed by making the hiding apply only to comments at most, say, three levels down from a downvoted comment.
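As a sketch of that variant (hypothetical code; the three-level cutoff is just the example figure above):

```python
# Sketch of the depth-limited variant: hiding propagates at most three
# levels below a buried comment, so distant descendants stay visible.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    score: int
    parent: Optional["Comment"] = None

def is_hidden(c: Comment, max_depth: int = 3) -> bool:
    node, depth = c.parent, 1
    while node is not None and depth <= max_depth:
        if node.score <= -4:
            return True
        node = node.parent
        depth += 1
    return False

buried = Comment(score=-6)
lvl1 = Comment(score=0, parent=buried)
lvl2 = Comment(score=0, parent=lvl1)
lvl3 = Comment(score=0, parent=lvl2)
lvl4 = Comment(score=0, parent=lvl3)
assert is_hidden(lvl1) and is_hidden(lvl2) and is_hidden(lvl3)
assert not is_hidden(lvl4)  # four levels down: no longer hidden
```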
Perhaps a way to make this work would be to automatically unhide downstream comments whose upvotes are greater than the sum of the downvotes of all its negative-karma parents? In that way, a good (ie. high-karma) discussion can’t be killed by a low-karma parent thread so easily.
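The proposed override might be sketched like this (hypothetical code under the assumptions above; per-comment `upvotes` and `downvotes` fields are assumed, not taken from any real schema):

```python
# Sketch of the proposed override: a descendant stays visible when its own
# upvotes exceed the combined downvotes of every negative-karma ancestor.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    upvotes: int
    downvotes: int
    parent: Optional["Comment"] = None

    @property
    def karma(self) -> int:
        return self.upvotes - self.downvotes

def is_visible(c: Comment) -> bool:
    penalty = 0  # downvotes accumulated by negative-karma ancestors
    node = c.parent
    while node is not None:
        if node.karma < 0:
            penalty += node.downvotes
        node = node.parent
    # No penalized ancestors: always visible. Otherwise the comment must
    # "outvote" the ancestors that would have hidden it.
    return penalty == 0 or c.upvotes > penalty

bad_parent = Comment(upvotes=1, downvotes=6)  # karma -5: would hide children
weak_child = Comment(upvotes=2, downvotes=0, parent=bad_parent)
strong_child = Comment(upvotes=9, downvotes=1, parent=bad_parent)
assert not is_visible(weak_child)
assert is_visible(strong_child)
```

Under this rule a sufficiently upvoted reply survives its parent’s burial, so a good discussion can’t be killed by a single low-karma ancestor.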
you can “swim up” the chain of comment parents until you find one that is at −3, and by downvoting that cause the entire downthread discussion to be effectively censored.
That only works when you have large discussions under downvoted comments, which should become much less common now.
The interesting issues arise when a large discussion arises first, and an ancestor comment is downvoted later.
“It seemed like a good idea at the time.”
Care to translate that “should” into a well-specified forecast with attached probability? :)
We are assuming that the change will affect comments at −4 or lower, but it might not change the number of large discussions under comments at −3. There might be discussions that transition between censored and uncensored. The censorship might actually prevent comments at −4 from being downvoted even worse, and thus could perversely make transitions back to −3 more likely.
It would be interesting to run some stats on how frequent the event of interest is (assuming we can specify it coherently), before and after the change. Based on my memory of the LW codebase, votes are stored transactionally, so it should be possible to compute before/after statistics at any time.
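If per-vote records with timestamps are indeed available, the before/after comparison might be sketched roughly as follows (an entirely hypothetical schema with toy data; a real analysis would replay the stored vote transactions to reconstruct scores at each point in time):

```python
# Rough sketch of the proposed before/after statistic: for each comment,
# check whether any ancestor ended at -4 or below, then bucket comments
# by whether they were posted before or after the rule change.
from datetime import datetime

# comments: id -> (parent_id or None, final_score, created)
comments = {
    1: (None, 8, datetime(2012, 1, 10)),
    2: (1, -5, datetime(2012, 1, 11)),
    3: (2, 3, datetime(2012, 1, 12)),  # descendant of a -5 comment, "before"
    4: (2, 1, datetime(2012, 3, 1)),   # descendant of a -5 comment, "after"
    5: (1, 2, datetime(2012, 3, 2)),
}
CHANGE = datetime(2012, 2, 1)

def has_buried_ancestor(cid: int) -> bool:
    parent = comments[cid][0]
    while parent is not None:
        if comments[parent][1] <= -4:
            return True
        parent = comments[parent][0]
    return False

before = sum(1 for cid, (_, _, t) in comments.items()
             if t < CHANGE and has_buried_ancestor(cid))
after = sum(1 for cid, (_, _, t) in comments.items()
            if t >= CHANGE and has_buried_ancestor(cid))
print(before, after)  # -> 1 1
```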
There might be discussions that transition between censored and uncensored.
This could be annoying if we have to check all the comments upstream from the one we’re responding to, to make sure there isn’t a comment that might be downvoted to −4 in the future and make the effort a waste. I pointed out a bunch of potential downsides of this proposal to Eliezer, but even I didn’t think of this one.
Yeah, most of the systems that do this sort of thing seem to hide the low-scoring comments but show high-voted children, avoiding that sort of problem.
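A toy version of that alternative (hypothetical code, not any real system’s implementation): the low-scoring comment itself is collapsed, while its children are judged only on their own scores.

```python
# Alternative used by several comment systems: collapse the low-scoring
# comment itself, but never hide its children merely for being descendants.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    score: int
    parent: Optional["Comment"] = None

def is_collapsed(c: Comment, threshold: int = -4) -> bool:
    # Only the comment's OWN score matters; ancestors are irrelevant.
    return c.score <= threshold

troll = Comment(score=-7)
good_reply = Comment(score=12, parent=troll)
assert is_collapsed(troll)
assert not is_collapsed(good_reply)
```

Since a reply’s visibility depends only on its own score, a reply can’t be retroactively buried by later downvotes on an upstream comment.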
Very approximately, I’d say I expect at least 50% fewer comments posted in downvoted threads, with probability 70%.
(though I don’t think that adding precise numbers adds much to the discussion)
That’s not necessarily what I mean by a “well-specified forecast”. Be careful not to confuse “precise” and “accurate”… For instance by “downvoted” do you mean “net votes below 0”, “having received any downvotes”, or “net votes at −4 or below”?
The last—by “downvoted threads” I meant “threads descended from a comment at −4 or less” (though there should also be an effect for threads whose most downvoted parent is at −3 or at −2). It’s a bit of a pity there isn’t a standard name for those.
OK, so to be clear: you’re predicting a roughly 50% decrease of the population “comments which are descendants of comments downvoted −4 or more”. This at “I’d be pretty surprised if it turned out otherwise”, which is my verbal equivalent of 70%. (For 80% it’s “I would be shocked” and for 90% it’s “I’d seriously question my worldview on the topic in question”, for 99% it’s “you should really not be messing around with anything remotely connected with that topic, you’re dangerous to yourself and others”.)
Here are some of the uncertainties.
We don’t know how large this population is currently. There is a subjective feeling that this number is significant and annoyingly so, but if it is small then it may be hard to detect an effect among the noise.
We don’t know how many new comments arise from replies to Recent Comments, as opposed to two people going back and forth, or people explicitly looking for new stuff in a discussion they’re following, or people following a particular commenter.
We don’t know how fast low-quality comments get to −4 before they have accrued substantial discussion, or alternatively the ratio between the number of comments accumulated before reaching −4 and the number accumulated after.
Sadly, I’m about 65% sure that we’ll never get to have actual stats on the above, or on the prediction itself. :-/
Amusingly, this comment appears to be one such instance where a single downvote could remove a moderately large number of child comments.