META: Deletion policy
http://wiki.lesswrong.com/wiki/Deletion_policy
This is my attempt to codify the informal rules I’ve been working by.
I’ll leave this post up for a bit, but strongly suspect that it will have to be deleted not too long thereafter. I haven’t been particularly encouraged to try responding to comments, either. Nonetheless, if there’s something I missed, let me know.
Suggestion: I recommend sending people their deleted posts.
I find it annoying to spend the effort to type a post, only to have it disappear into a bit bucket. If you want it gone, that’s your prerogative, but I think it is a breach of etiquette for a forum to destroy information created by a forum user.
Now I assume you found the original post a breach of etiquette, so you may feel that tit for tat is the right policy here. I’d consider an intentional breach of etiquette an unnecessary escalation.
You can still see your own banned comments on your user page. This might be false for posts, I’m not sure.
Judging by Kodos96’s user page, the same is the case for posts, i.e., they are still visible after being “censored.”
This seems like a good thing to do as a courtesy in cases where it seems reasonable.
If it were an actual policy, you’d want to put some limits on it, e.g. “if the post is longer than X words and/or contains something that was clearly meant to be intelligent thought.”
I used to do that for a long time on a large-ish subreddit I mod. Eventually it became too much of a burden; the workload footprint was too large. It may be a feasible policy to try to do that on LW, given the (hopefully) very low volume of deleted content.
This sounds like something that could be handled by a script, so as to be an utterly transparent process. In your role as a subreddit mod it wouldn’t be so easy, but LW’s admins have source access.
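For concreteness, here is a minimal sketch of what such a script might look like for a subreddit, using Reddit’s public API via the PRAW library (so it doesn’t strictly require source access). The credentials, the subreddit name, and the message wording are all placeholder assumptions, and a real deployment would need error handling and rate-limit care:

```python
# Hypothetical sketch: watch the moderation log and message removed posts
# back to their authors, so the text isn't simply lost.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    username="mod_bot_account",
    password="YOUR_PASSWORD",
    user_agent="removed-post-returner/0.1",
)

subreddit = reddit.subreddit("example_subreddit")  # placeholder subreddit

# Stream post-removal actions from the mod log; skip_existing=True ignores
# removals that happened before the bot started.
for action in subreddit.mod.stream.log(action="removelink", skip_existing=True):
    # target_body holds the removed post's text (empty for link posts).
    if action.target_author and action.target_body:
        reddit.redditor(action.target_author).message(
            subject="Your post was removed",
            message="For your records, a copy of your removed post:\n\n"
            + action.target_body,
        )
```

On LW itself, with source access, the same idea could presumably be a hook in the deletion code path rather than a polling bot.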
Good idea, that difference escaped my notice.
Post deletion is apparently rare and will remain so. If you type a post which clearly falls under the deletion policy, you deserve to have it disappeared without a trace. I’m sure that borderline cases would be discussed first and you will have a chance to edit your submission.
Concrete suggestions:
1. Bring the policy statements to the forefront; put the lengthy “background” discussion of “free speech” vs. “walled gardens” and the like in a brief FAQ or discussion section at the end. The first line of the policy statement should be the one beginning “Most of the burden of moderation …”
Reason: Most readers want to know what the policy is — so that should come first. Most of the people who want to argue about the theory of the policy are looking to have an enjoyably clever argument, which the “background” provides — so that should be there, but not in front.
2. Use formatting to emphasize the document’s structure. As it stands, there’s not enough visual structure for the eye to pick out the little numbers that indicate new points. More notably, the paragraph that separates the “more controversial” items looks structurally like it should be the explanation of the spam item.
3. Readers have heard of the common cases. Spam, harassment, and posting of personal information are things that lots of forums ban; LW is not unusual in this regard. In short, if it’s against Reddit’s policy, it doesn’t need a lot of explanation.
4. Be careful about spam and SEO. A major (possibly primary) reason to delete spam is that spam is clutter that gets in the way of people reading the forum. If someone posted a thousand posts that just contained “foo”, that would be spam and would be deleted, even though it has nothing to do with SEO. Commercial spam is bad because allowing it creates a monetary incentive for endless clutter production.
5. The harassment section is too specific. There are a lot of forms of harassment that I suspect you’d want to get rid of that don’t involve “following a particular user around and leaving insulting comments”.
Also:
The violence section is much better explained than in the previous post discussing it. Specifically, the unwelcoming effect of “hypothetical” violence proposals is a really good point.
Formatting added.
Yay! Thank you.
Either that or it isn’t specific enough and he could have come out and said what he really meant.
It was annoying to think I knew what you were referring to by reading this comment in isolation but it was depressing to be right.
I own the “everything-list” Google Group, which has no explicit moderation policy, although I do block spam and the occasional completely off-topic post from newbies who seemingly misunderstood the subject matter of the forum. It worked fine without controversy or anything particularly bad happening, at least in the first decade or so of its existence, when I still paid attention to it. I would prefer if Eliezer also adopted an informal but largely “hands off” policy here. But looking at Eliezer’s responses to recent arguments as well as past history, the disagreement seems to be due to some sort of unresolvable differences in priors/values/personality and not amenable to discussion. So I disagree but feel powerless to do anything about it.
Interesting. A couple hypotheses:
1) Admins overestimate the effect that certain policies have on behavior (they may underestimate random effects, or assign effects to the wrong policy); just like parents might overestimate the effect of parenting choices, or managers overestimate the impact of their decisions (“we did daily stand-up meetings, and the project was completed on time—the daily stand-up meetings must be the cause!”).
2) Eliezer is more concerned about the public image of LessWrong (both because of how it reflects on CFAR and SIAI, and on the kind of people it may attract) than you are (were?) about the everything-list.
For what it’s worth, I’m fine with moderation of stupid things like discussing assassinations, and with banning obnoxious trolls and cranks and idiots; the main reason to refrain from those kinds of mod actions would be to avoid scaring naive young newcomers who might see them as an affront against Sacred Free Speech.
Your testimony of a case where you still have quality discussion with very light moderation makes me slightly less in favor of heavy-handed moderation.
(I’m not sure that the moderation here is becoming “stronger” recently, as opposed to merely a bit more explicit)
3) Eliezer’s tolerance for “crazy” or stupid posts is so low that he’s way more pissed off by even a small number of them existing than other people are.
It seems to me the occasional crazy idea posted here wouldn’t reflect that badly on CFAR and SIAI, if they had a policy of “LW is an open forum and we’re not responsible for other people’s posts”, especially if the bad ideas are heavily voted down and argued against, with the authors often apologizing and withdrawing their own posts.
A crazy idea reflects badly on the ideology that spawned the crazy idea.
If that were true, LessWrong would have such an INCREDIBLY HUGE advantage over most every major religion. LessWrong hasn’t managed to raise armies and invade sovereign nations yet, after all.
Thinking in those terms, it makes me strongly suspect anyone turned away by a single bad post is engaging in some VERY motivated cognition, and probably would not have stayed long. (A high noise:signal ratio, on the other hand, would be genuinely damaging)
No one here felt distraught with religion? Not even a little? :)
No, the main reason is to avoid evaporative cooling and slippery slopes, a.k.a., the reasons free speech is such a sacred value.
Keep in mind Eliezer himself would be considered a crank by most “mainstream skeptics”.
Do you think there’s a big risk of evaporative cooling because Eliezer bans too many things? (assuming his current level of banning, not a much higher one) It’s true that the infamous Roko case seems to fit the bill, and Wei Dai’s concerns make me at least think it’s possible—but I would expect a greater risk in the opposite direction, of the quality of discussion being watered down by floods of comments on stupid topics, meaning that people who don’t have time to sort through all the clutter may end up giving up participating in most discussions.
Having spent a few years chatting on karma-less, completely unmoderated fora (spam would be deleted, but nothing else), I can say that this does not seem to occur. The pattern seems to be that when someone says something the forum considers stupid, this is remarked upon, and then they either attempt to improve to be more in line with the general opinion, or leave. People are not really gluttons for punishment—if a community does not welcome them, they (usually) will not continue participating in it—and the ratio of new users to old users is typically very low, so norms are maintained in the medium term (barring major news coverage or something).
Although I guess that without the deletion policy, discussion may drift further away from rationality; so if you think most of that drift would be boring or mindkilling, the policy may be of value.
Eliezer has pretty blatantly stated that the reasoning was #2.
There is a large difference between running a private list and a more accessible forum associated with an organization (the logos on top).
The section on “information hazards” has an actual live link to TVTropes. Irony much?
Heh! Irony emphasized.
This started me on a trope-walk, though I was eventually able to pull myself back to what I was doing. :P
Irony indeed.
I agree with this policy.
At the very least, this needs a citation or two, since the following sources cast doubt on the story as presented:
WebMD’s account
CNN’s account
Snopes’ account
And CSI’s account, which includes the following:
And then goes on to argue that the large number of cases was due to mass hysteria.
Please link to the wiki page somewhere so that it’s not an orphan. Official policies need to be readily accessible. Also consider making it visible on the main site somewhere, if at all possible.
Linked to the new page from Moderation tools and policies, linked to ‘Moderation tools and policies’ from the wiki sidebar (section ‘Community’).
Thank you.
This can be carried out by non-admins (at least the first part).
It usually doesn’t happen.
As I read it, the policy does not address the basilisk and basilisk-type issues, which, while I don’t think they should be moderated, are. “Information Hazards” specifically says “not mental health reasons.”
A true basilisk is not a mental health risk, or at least not only that. Whether one has actually been found is a separate question (I lean toward no).
IIRC, allegedly there were a few people with OCD having nightmares after reading that post by Roko.
My point was that it doesn’t cause mental health problems, not that it can’t trigger them. Perhaps that’s a bad way to put it. If it does, there’s something beyond the information hazard going on, either an existing problem being triggered, or a multiple hazard. As I understand it, a basilisk is hazardous because you know the argument, without it needing to corrupt your reasoning abilities. Roko’s is alleged to be hazardous even to a rational agent. (I don’t think it is, and I think censoring it prevents an interesting debate about why. I don’t plan to say any more, given the existing censorship policies. If this is already too much, please let me know and I will edit accordingly.)
Quantum roulette is a possible candidate.
Well, the “LW basilisk” just turned out to be a knife sharp enough to cut yourself with. And sometimes you need sharp knives.
It does, inasmuch as it includes:
This particular entry makes all the others more or less redundant. That is perhaps better than only having the “Information Hazards” clause, because Eliezer deleting something based on “Eliezer says so” is at least coherent and unambiguous. It doesn’t matter whether a post by Roko is actually dangerous; the says-so clause can still cover it, and we can just roll our eyes and tolerate Eliezer’s quirks.
Well, his attempt here is to lay out a bit more than “because Eliezer says so” as a reason.
I suspect a good deal of angst around the topic has been from people seeing the issues in online communities as symbolic of real-world issues—opposing policies not because they are bad for an online community, but because they would be bad if applied by a real-world government to a real-world nation; real-world governments come to mind because we have reasons to care more strongly about them, and we hear much more about them. But there are important differences! The biggest is that you can easily leave an online community any time you’re not happy about it. I don’t think an online community is more similar to a nation than it is to a bridge club, or a company, or a supermarket, or the people making an encyclopedia.
I don’t think the concern about the symbolism of censorship is completely wrong; it’s quite possible that China could argue that real-world censorship is important for the same reasons it is in online communities!
Somewhat off-topic, but this makes me think that maybe school should teach a bit about “online history”—the history of Usenet and Wikipedia for example.
This seems like a good deletion policy, but doesn’t cover all the actual deletions that have been threatened. Edit: specifically, the policy of allowing certain parties to ban direct refutations of their arguments (edit2: from particular users).
At the end, the policy says that the policy does not force the mods to delete anything. Perhaps it should in the same breath also say that it does not prevent them from deleting anything. The judgement of the mods and admins is final and above the policy; the purpose of the policy is to inform them and the readership of the general principles that will be applied.
I was asked to post the following by an anonymous member.
Regardless of whether the authors “accept” this moral burden, to “indicate” that they do would be unwise. If you can get in serious trouble for saying something, the public statements of smart people are a lot less evidence for what they actually think on that topic.
I agree with this policy.
Is the Pokemon story actually true? Casual googling suggests probably not, but I haven’t investigated carefully enough to have a very strong opinion. Specifically, I didn’t find corroboration of the claim that most of the children who went to hospital had seen news reports rather than the original programme.
This just says that some of the children were stricken later; if I had to guess, I’d say the vast majority were stricken during the actual show.
So noted. Will try to remember to edit at some point.
“[...] ‘Pikachu,’ a rat-like creature [...]”
That looks quite wall-of-text-y; it could be made more concise. Also, “We live in a society”—“we” who? Not all LW users are from the US, or even from the Anglosphere, or even from the Western world. While probably every LWer comes from some society with some stupid laws, that sentence still sounds kind of off to me.
It’s nice to have written ground rules, even if they are basically common sense.
I think this seems like a basically fine policy.
I will also say that my own experience being a moderator is firmly in agreement with http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/ , and thus in opposition to those who would rather see a totally hands-off approach to moderation.
Why would this post need to be deleted?
Because people can reply to it and some replies are disagreements.
So, there might be comments on LW of people disagreeing with Eliezer’s policy. The horror.
Nah, he likely means that the comments might become so full of censorable examples that the entire branch of discussion would get tainted. I hope not.
(I’m moderately against the tightening of censorship policy, BTW, but I understand Eliezer’s reasoning, and I’m fine with it.)
I agree with this policy. It sounds totally benign and ordinary.
If you mean comment karma, consider that in the case where people appreciate your responses but strongly disagree with their content, they will downvote you instinctively, as soon as they would furrow their brows: it’s an immediately available, low-effort way to scratch the itch of dissenting feelings. Since downvotes seem to give you cold-stabbies but don’t make you reevaluate your positions, instinct-downvoting is doubly ineffective, yet still the default. We’ve now learned that saying “This isn’t a poll. You have to correct me or I won’t stop being wrong” isn’t enough to break that habit.
Indeed, and we (the LW community) have to learn to tell the difference between deliberate trolls and misguided rationalists for our moderation to be effective. In the same way that replying to a troll is a mistake in that it feeds their attention craving, not replying to a wrong non-troll can be a mistake in that they don’t notice their error. Maybe a lower downvote limit (4× karma) would help break the aforementioned habit; a sketch of what such a cap might look like follows.
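(For concreteness, a minimal sketch of the kind of karma-gated downvote cap being discussed; the function name and the default 4× multiplier are illustrative assumptions, not LW’s actual code:)

```python
# Hypothetical sketch of a karma-gated downvote cap: a user may cast at
# most `multiplier` total downvotes per point of karma earned.
def can_downvote(user_karma: int, downvotes_cast: int, multiplier: int = 4) -> bool:
    # max(user_karma, 0) keeps negative-karma users at a cap of zero.
    return downvotes_cast < multiplier * max(user_karma, 0)


# Example: a user with 10 karma under the 4x rule may cast a 40th downvote
# but not a 41st; lowering the multiplier tightens the cap.
assert can_downvote(user_karma=10, downvotes_cast=39)
assert not can_downvote(user_karma=10, downvotes_cast=40)
```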
Then there’s the possibility that someone enjoys intentionally pretending to be clueless as a means of trolling and further enjoys that it disrupts people’s instinct to provide guidance to misguided rationalists.
That would be incredibly hard on the moderators. Thankfully, being smart enough to think of that and dumb enough to be a troll isn’t a very plausible interval of human intellect.
Unfortunately, sometimes gifted people are trolls.
I would repeat the statement about the policy not being binding at the top as well, not just at the end.
Well...
I’m upset by this.
Not sure why, exactly, but yeah, definitely upset by this. Just felt like sharing.
If you could figure that out, that would be helpful.
Intuitive gut reaction. If I had an argument to make I would have said so. Any case I make would have been formed from backtracking from my initial feeling, and I’m probably not the only commenter here arguing based on an “ick” or “yay” gut reaction to the idea of censorship. I thought it was worth pointing out.
As I see it, this is sort of like that quote on truth that goes something like “You may as well acknowledge the truth—you’re already dealing with it.”
Censorship was already happening on LessWrong. Now that Eliezer is making an effort to share some of his decision-making process, there is less to fear in a way since you get to have that additional info for guessing what he’s likely to do.
Fear of the unknown can feel a lot worse than fear of the known.
I think you mean the Litany of Gendlin, and I believe some of these rules are being newly implemented, but I could be wrong about that.
He can run his site any way he wants, and most of the ideas here are reasonable precautions given his values. That doesn’t change the fact that I intuitively don’t like them when I read them, and that gut reaction (or possibly its opposite) is probably shared with others here, who probably allow it to color their arguments one way or the other. Just something to keep in mind, is all.
Oh thank you. I kept wondering what that quote was.
Oh, that is a good point.
I was trying to make you feel better.
Status quo bias: I’m reasonably sure that if this policy had been in place from Day 1, very few people would have given it a second thought.
I remember that one way to combat status quo bias is re-framing. I am about to read the new deletion policy for the first time, but I am going to consciously frame it as “this is a deletion policy already in place for a site I am considering joining” rather than “this is a change to a deletion policy for a site I have already joined.”
[Goes to read the policy]
In that frame, I would like the deletion policy and it wouldn’t otherwise discourage me from joining the site. I would appreciate that the moderators would be taking moderation seriously, as opposed to some other sites I know of. In particular, the example about academic conferences is a great illustration of the argument.
My only concern is about the broad language used under the sections “Prolific trolls” and “Trollfeeding.” The policy refers to commentators who
as well as
Can the policy be amended to quantify those qualitative standards? Or, if for practical purposes we can’t quantify those standards, then include a sentence to emphasize that interpretation of the standard is at the moderator’s individual discretion.
This has always been the LW mission, and it’s true that some threads are not at all on subject. And then it makes sense to delete them if their net value is even slightly negative, perhaps even if they are merely shown to take too much attention away from rationality topics. Although, I would appreciate it if the first tool used was a request or warning by a moderator to stop discussing something, rather than just deleting it.
People do want to discuss off-topic things, and I at least would like to do it with fellow LW users. (And I prefer forums or mailing lists to IRC.) Perhaps there is enough interest now to establish an offsite, unaffiliated, lightly moderated, Offtopic Discussion forum for LW users. Perhaps such a splitting off would also benefit LW by keeping it more focused on rationality. What do people think?
I see no definition for the word troll. It seems like a thing that should be obvious, but I’ve seen people using the word “troll” to describe people who are simply ignorant. I think I’m also picking up on a trend where, if a comment is downvoted, it is considered trolling regardless of the fact that it was simply an unpopular comment by an otherwise likable user. LessWrong seems to use a broader definition of the word “trolling” than I am used to. If you guys have your own twist on “trolling” it would be good to add LessWrong’s definition to the wiki.
I don’t think a formal definition of the word “troll” would be useful; the term is used somewhat informally to refer to the general blob of “problematic users”—trolls, idiots, cranks, aggressive and self-centered users, people who won’t shut up about their pet topic, etc. The borders are somewhat fuzzy, and any attempt to formalize them is likely to be too broad or too narrow. Would you be able to properly formalize the kind of behavior you don’t want on a website you run, without being too broad or too narrow?
“Troll” is something like an unambiguous central example of the class of behaviors to be discouraged, but if the policies hit a broader target and also discourage non-trolling obnoxious cranks and idiots, that’s a feature, not a bug.
Incidentally, I agree that using “trolling” to describe any downvoted comment (as in the “troll toll”) is somewhat unfortunate; many downvoted comments are from users who sincerely want to convince everybody that if they would stop being blinded by politically correct groupthink, they would recognize that lizard-men are controlling the government. But then, “troll toll” has a nice ring to it.
I can see how this would be more useful from the perspective of the person doing the banning, but I don’t see why it would be useful from the perspective of the person who is attempting to avoid being banned. Flexible for one purpose, too vague for the other.
Somebody has probably already done so. Not perfectly, of course. But they’ve probably already done so. There might even be a description of undesired behavior in an open source context, either as part of a free legal terms of service agreement, or as part of a piece of open source software. It is quite possible that a good free description has already been written and just needs editing. It’s also possible to do better than be flexible/vague and provide a list of behaviors (such as the one you created above) that briefly describes the main concerns, without it being perfect, and simply aim to make an improvement on flexible/vague.
The problem is that people with idiotic ideas do not know they are being idiotic, and I think that although some cranks do know that they’re wrong and are content trying to scam people, other cranks are just as clueless as their customers, and have no idea that what they’re selling is a ripoff. For instance: I’m not religious, but do I consider a priest a crank? No. I consider a priest somebody who genuinely believes the ideas they’re selling, not somebody intentionally deceiving people in order to collect donation money. For this reason, using the words “cranks” and “idiots” is probably not likely to work—something like “If you don’t bother to support your points with rational arguments and don’t update and keep bothering us, we’ll boot you.” would be more likely to help them realize it’s targeted at them.
I agree with most of what you say here; there are probably some places where “troll” could have been replaced by something more precise in a way that would be more useful.
I agree that it’s important to help “borderline problematic users” to mend their ways, but I don’t think the deletion policy is the best place to do that; a precise and detailed deletion policy risks increasing the amount of nitpicking over whether such-and-such moderator action was really justified by the rules (even if those “rules” are actually just said moderator trying to explain by what principles he acts, not a binding legal document!), or nitpicking about whether such-and-such hypothetical case should be banned or not; neither of those two conversations are things I’m particularly interested in reading.
So I think it may be more efficient to help good faith users by improving welcome pages, or talking to them in welcome threads, etc.
The not wanting to nitpick is a good point. I don’t know whether a more specific definition of troll would necessarily result in more nitpicking. If readers take “troll” by the stereotypical definition (like what ArisKatsaris provided over here), and then somebody gets deemed a troll and censored for saying idiotic things without an intent to annoy (or for some other reason not typically associated with the stereotypical troll), then this could spark controversy, and you still get the nitpicking conversation. Verbiage like “anybody who trolls, but not limited to that” or “we think trolls are this, that, and the other, but not limited to that” may make any nitpicking conversations rather short. “We said it wasn’t limited to that. End of conversation.”
Trolls are generally people who post with the hope of invoking a negative reaction (e.g. negative responses, flames, downvotes, censorship, bans). Identifying trolls is often a harder job than defining them.
So does asking for criticism of your argument count as trolling?
There’s a difference between asking for criticism of a post/argument that you nonetheless hope to be good, and intentionally making a bad argument so that you will be criticized.
I think the difference I’m talking about is well understood.
Basically, would Socrates be considered a troll?
Thanks. That looks like the stereotypical definition of troll to me. Is it that you’re saying LessWrong does not use the word “troll” differently, and the ambiguity is just due to people having a hard time figuring out who is a troll?
‘LessWrong’ is composed of many people. I’m sure that some use it the way I use it, and some have different definitions. I don’t think that LessWrong differs in this respect from any other forum or community.
I’m really disappointed in EY—the wiki page is incredibly careless of the safety of Brittany Fleegelburger, purple-eyed people, and congohelium producers. Large amounts of common sense indeed!
(The parent has an intended meaning over and above the feeble attempt at humor. It lies in the fact that I could have posted about a genuine concern—if I had one.)
Consider adding something like “in return for donating $X to Y you will get a detailed reason for why your post was deleted.”
Doesn’t this create a very poor set of incentives?
Not if X is small or Y is unaffiliated with the censors.
A (short) reason should be common courtesy except for spam and egregious trolls.
EDIT: Assuming this sort of thing is low enough volume not to substantially add to the work the deleter does in deleting posts.