Not a direct response, but I want to take some point in this discussion (I think I said this to Zack in-person the other day) to say that, while some people are arguing that things should as a rule be collaborative and not offensive (e.g. to varying extents Gordon and Rafael), this is not the position that the LW mods are arguing for. We’re arguing that authors on LessWrong should be able to moderate their posts with different norms/standards from one another, and that there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
I could see it being confusing because sometimes an author like Gordon is moderating you, and sometimes a site-mod like Habryka is moderating you, but they are using different standards, and the LW-mods are not typically endorsing the author standards as our own. I even generally agree with many of the counterarguments that e.g. Zack makes against those norms being the best ones. Some of my favorite comments on this site are offensive (where ‘offensive’ is referring to Wei’s meaning of ‘lowering someone’s social status’).
We’re arguing that authors on LessWrong should be able to moderate their posts with different norms/standards from one another, and that there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
What is currently the acceptable range of moderation norms/standards (according to the LW mod team)? For example, if someone blatantly deletes/bans their most effective critics, is that acceptable? What if they instead subtly discourage critics (while being overtly neutral/welcoming) by selectively enforcing rules more stringently against their critics? What if they simply ban all “offensive” content, which as a side effect discourages critics (since, as I mentioned earlier, criticism almost inescapably implies offense)?
And what does “retribution or counter-punishment” mean? If I see an author doing one of the above, and question or criticize that in the comments or elsewhere, is that considered “retribution or counter-punishment” given that my comment/post is also inescapably offensive (status-lowering) toward the author?
What is currently the acceptable range of moderation norms/standards (according to the LW mod team)?
I think the first answer is “Mostly people aren’t using this feature, and the few times people have used it, it has not felt to us like abuse or like something strongly needing to be pushed back on”, so I don’t have any examples to point to.
But I’ll quickly generate thoughts on each of the hypothetical scenarios you briefly gestured to.
For example if someone blatantly deletes/bans their most effective critics, is that acceptable?
It’d depend on how things played out. If Andrew writes a blogpost with a big new theory of rationality, and then Bob and Charlie and Dave all write decisive critiques, and then their comments are deleted and they are banned from commenting on his posts, I think it’s quite plausible that they’ll write a new post together with a copy-paste of their comments and it’ll get more karma than the original. This seems like a good-enough outcome to me. On the other hand, if Andrew only gets criticism from Bob, and then deletes Bob’s comments and bans him from commenting on his posts, and then Bob leaves the site, I would take more active action, such as perhaps removing Andrew’s ability to ban people, and reaching out to Bob to thank him for his comments and encourage him to return.
What if they instead subtly discourage critics (while being overtly neutral/welcoming) by selectively enforcing rules more stringently against their critics?
That sounds like there’d be some increased friction on criticism. Hopefully we’d try to notice it and counteract it, or hopefully the commenters who were having an annoying experience being moderated would notice and move to shortform or posts and do their criticism from there. But plausibly there’d just be some persistent additional annoyances or costs that certain users would have to pay.
What if they simply ban all “offensive” content, which as a side effect discourages critics (since, as I mentioned earlier, criticism almost inescapably implies offense)?
I mean, again, probably this would just be very incongruous with LessWrong and it wouldn’t really work, and they’d have to ban like 30+ users because not everyone would get this and people would keep doing things the author didn’t like, and the author would eventually leave if they needed that sort of environment, or we’d step in after like 5 and say “this is kind of crazy, you have to stop doing this, it isn’t going to work out, we’re removing your ability to ban users”. So many of the good comments on LessWrong lower their interlocutor’s status in some way.
And what does “retribution or counter-punishment” mean?
It means actions that predictably make the author feel that their using the ban feature is in general illegitimate, or that using it will cause their reputation to be attacked in response, regardless of reason or context.
If I see an author doing one of the above, and question or criticize that in the comments or elsewhere, is that considered “retribution or counter-punishment” given that my comment/post is also inescapably offensive (status-lowering) toward the author?
Many many writers on LessWrong are capable of critiquing a single instance of a ban while taking care to communicate that they are not pushing back on all instances of banning, and can also credibly offer support in other instances that are more reasonable.
Generally it is harder to signal this when you are complaining about your own banning. For in-person contexts (e.g. events) I generally spend effort to ensure that people do not feel any cost for not inviting me to events or spaces, and do not expect that I will complain loudly or cause them to lose social status for it, and a similar (but not identical) heuristic applies here. If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
There is still good form and bad form to imposing costs on people for moderating their spaces, and costs imposed on people for moderating their spaces (based on disagreement or even trying to fix biases in the moderation) are the most common reason for good spaces not existing; moderation is unpleasant work, lots of people feel entitled to make strong social bids on you for your time and to threaten to attack your social standing, and I’ve seen many spaces degrade due to unwillingness to moderate. You should of course think about this if you are considering reliably complaining loudly every time anyone uses a ban feature on people.
Added: I hope you get a sense from reading this that your questions don’t have simple answers, but that the scenarios you describe require active steering depending on the dynamics at play. I am somewhat wary that you will keep asking me a lot of short questions that, due to your inexperience moderating spaces, you will assume have simple answers, and I will have to do lots of work generating all the contexts to show how things play out, else Said, or someone allied with him against his being moderated on LW, will claim I am unable to answer the most basic of questions and that this shows me to be either ignorant or incompetent. And, man, this is a lot of moderation discussion.
If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
If I was in this circumstance, I would be pretty worried about my own biases, and ask neutral or potentially less biased parties whether there might be more charitable and reasonable hypotheses why that person doesn’t want to interact with me. If there isn’t though, why shouldn’t I complain and e.g. make it common knowledge that my valuable criticism is being suppressed? (Obviously I would also take into consideration social/political realities, not make enemies I can’t afford to make, etc.)
I’ve seen many spaces degrade due to unwillingness to moderate
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seem worth the cost (to potential critics and people like me who really value criticism) of not knowing when their critiques might be unilaterally deleted or banned by post authors. I mean aside from the “benefit” of attracting/retaining the authors who demand such unilateral powers.
And, man, this is a lot of moderation discussion.
Aside from the above “benefit”, it seems like you’re currently getting the worst of both worlds: a lack of significant usage (and therefore of its potential positive effects), and lots of controversy when it is occasionally used. If you really thought this was an important feature for the long-term health of the community, wouldn’t you do something to make it more popular? (Or have done it in the past 7 years since the feature came out?) But instead you (the mod team) seem content that few people use it, only coming out to defend the feature when people explicitly object to it. This only seems to make sense if the main motivation is again to attract/retain certain authors.
I am somewhat wary you will keep asking me a lot of short questions that, due to your inexperience moderating spaces, you will assume have simple answers, and I will have to do lots of work generating all the contexts to show how things play out
It seems like if you actually wanted or expected many people to use this feature, you would have written some guidelines on what people can and can’t do, or under what circumstances their moderation actions might be reversed by the site moderators. I don’t think I was expecting the answers to my questions to necessarily be simple, but rather that the answers already exist somewhere, at least in the form of general guidelines that might need to be interpreted to answer my specific questions.
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seem worth the cost
I mean, mostly we’ve decided to give the people who complain about moderation a shot, and compensate by having the moderators spend much, much more effort on moderation. My guess is this has cost the site a large amount of counterfactual quality, many contributors, etc.
In general, I find arguments of the form “so to the extent that LW hasn’t been destroyed, X can’t be that valuable” pretty weak. It’s very hard to assess the counterfactual, and “if not X, LessWrong would have been completely destroyed” is rarely the case for almost any X that is in dispute.
My guess is LW would be a lot better if more people felt comfortable moderating things, and in the present world, there are a lot of costs borne by the site admins that wouldn’t be necessary otherwise.
I mean, mostly we’ve decided to give the people who complain about moderation a shot
What do you mean by this? Until I read this sentence, I saw you as giving the people who demand unilateral moderation powers a shot, and denying the requests of people like me to reduce such powers.
My not very confident guess at this point is that if it weren’t for people like me, you would have pushed harder for people to moderate their own spaces more, perhaps by trying to publicly encourage this? And why did you decide to go against your own judgment on it, given that “people who complain about moderation” have no particular powers, except the power of persuasion (we’re not even threatening to leave the site!), and it seems like you were never persuaded?
My guess is LW would be a lot better if more people felt comfortable moderating things, and in the present world, there are a lot of costs borne by the site admins that wouldn’t be necessary otherwise.
This seems implausible to me given my understanding of human nature (most people really hate to see/hear criticism) and history (few people can resist the temptation to shut down their critics when given the power and social license or cover to do so). If you want a taste of this, try asking DeepSeek some questions about the CCP.
But presumably you also know this (at least abstractly, but perhaps not as viscerally as I do, coming from a Chinese background, where even before the CCP, criticism in many situations was culturally/socially impossible), so I’m confused and curious why you believe what you do.
My guess is that you see a constant stream of bad comments, and wish you could outsource the burden of filtering them to post authors (or combine efforts to do more filtering). But as an occasional post author, my experience is that I’m not a reliable judge of what counts as a “bad comment”, e.g., I’m liable to view a critique as a low-quality comment, only to change my mind later after seeing it upvoted and trying harder to understand/appreciate its point. Given this, I’m much more inclined to leave moderation to the karma system, which seems to work well enough at keeping bad comments at low karma/visibility, and even when it’s occasionally wrong, it still provides a useful signal to me that many people share the same misunderstanding and it’s worth my time to try to correct it (or maybe by engaging with it I find out that I still misjudged it).
But if you don’t think it works well enough… hmm, I recall writing a post about moderation tech proposals in 2016, and maybe there have been newer ideas since then?
I mean, I have written like 50,000+ words about this at this point in various comment threads. About why I care about archipelagos, and why I think it’s hard and bad to try to have centralized control over culture, about how much people hate being in places with ambiguous norms, and many other things. I don’t fault you for not reading them all, but I have done a huge amount of exposition.
And why did you decide to go against your own judgment on it, given that “people who complain about moderation” have no particular powers, except the power of persuasion (we’re not even threatening to leave the site!), and it seems like you were never persuaded?
Because the only choice at this point would be to ban them, since they appear to be willing to take any remaining channel or any remaining opportunity to heap as much scorn, snark, and social punishment as they can on anyone daring to do moderation they disagree with, and I value things like readthesequences.com and many other contributions from the relevant people enough that that seemed really costly and sad.
My guess is I will now do this, as it seems like the site doesn’t really have any other choice, and I am tired and have better things to do, but I think I was justified and right to be hesitant to do this for a while (though yes, ex post it would obviously have been better to just do that 5 years ago).
It seems to me there are plenty of options aside from centralized control and giving authors unilateral powers, and last I remember (i.e., at the end of this post) the mod team seems to be pivoting to other possibilities, some of which I would find much more reasonable/acceptable. I’m confused why you’re now so focused again on the model of authors-as-unilateral-moderators. Where have you explained this?
I have filled my interest in answering questions on this, so I’ll bow out and wish you good luck. Happy to chat some other time.
I don’t think we ever “pivoted to other possibilities” (Ray often makes posts with moderation things he is thinking about, and the post doesn’t say anything about pivoting). Digging up the exact comments on why ultimately there needs to be at least some authority vested in authors as moderators seems like it would take a while.
I meant pivot in the sense of “this doesn’t seem to be working well, we should seriously consider other possibilities” not “we’re definitely switching to a new moderation model”, but I now get that you disagree with Ray even about this.
Your comment under Ray’s post said:
We did end up implementing the AI Alignment Forum, which I do actually think is working pretty well and is a pretty good example of how I imagine Archipelago-like stuff to play out. We now also have both the EA Forum and LessWrong creating some more archipelago-like diversity in the online-forum space.
This made me think you were also no longer very focused on the authors-as-unilateral-moderators model and were thinking more about subreddit-like models that Ray mentioned in his post.
BTW I’ve been thinking for a while that LW needs a better search, as I’ve also often been in the position of being unable to find some comment I’ve written in the past.
Instead of one-on-one chats (or in addition to them), I think you should collect/organize your thoughts in a post or sequence, for a number of reasons including that you seem visibly frustrated that after having written 50k+ words on the topic, people like me still don’t know your reasons for preferring your solution.
We did end up implementing the AI Alignment Forum, which I do actually think is working pretty well and is a pretty good example of how I imagine Archipelago-like stuff to play out. We now also have both the EA Forum and LessWrong creating some more archipelago-like diversity in the online-forum space.
Huh, ironically I now consider the AI Alignment Forum a pretty big mistake in how it’s structured (for reasons mostly orthogonal but not unrelated to this).
BTW I’ve been thinking for a while that LW needs a better search, as I’ve also often been in the position of being unable to find some comment I’ve written in the past.
Agree.
Instead of one-on-one chats (or in addition to them), I think you should collect/organize your thoughts in a post or sequence, for a number of reasons including that you seem visibly frustrated that after having written 50k+ words on the topic, people like me still don’t know your reasons for preferring your solution.
I think I have elaborated non-trivially on my reasons in this thread, so I don’t really think it’s an issue of people not finding it.
I do still agree it would be good to do more sequences-like writing on it, though like, we are already speaking in the context of Ray having done that a bunch (referencing things like the Archipelago vision), and writing top-level content takes a lot of time and effort.
I think I have elaborated non-trivially on my reasons in this thread, so I don’t really think it’s an issue of people not finding it.
It’s largely an issue of lack of organization and conciseness (50k+ words is a minus, not a plus in my view), but also clearly an issue of “not finding it”, given that you couldn’t find an important comment of your own, one that (judging from your description of it) contains a core argument needed to understand your current insistence on authors-as-unilateral-moderators.
If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
If I was in this circumstance, I would be pretty worried about my own biases, and ask neutral or potentially less biased parties whether there might be more charitable and reasonable hypotheses why that person doesn’t want to interact with me. If there isn’t though, why shouldn’t I complain and e.g. make it common knowledge that my valuable criticism is being suppressed? (Obviously I would also take into consideration social/political realities, not make enemies I can’t afford to make, etc.)
I’m having a hard time seeing how this reply is hooking up to what I wrote. I didn’t say critics, I spoke much more generally. If someone wants to keep their distance from you because you have bad body odor, or because they think your job is unethical, and you either don’t know this or disagree, it’s pretty bad social form to go around loudly complaining every time they keep their distance from you. It makes it more socially costly for them to act in accordance with their preferences and makes a bunch of unnecessary social conflict. I’m pretty sure this is obvious and this doesn’t change if you’ve suddenly developed a ‘criticism’ of them.
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seem worth the cost (to potential critics and people like me who really value criticism) of not knowing when their critiques might be unilaterally deleted or banned by post authors. I mean aside from the “benefit” of attracting/retaining the authors who demand such unilateral powers.
I mean, I think it pretty plausible that LW would be doing even better than it is with more people doing more gardening and making more moderated spaces within it, archipelago-style.
I read you questioning my honesty and motivations a bunch (e.g. you have a few times mentioned that I probably only care about this because of status reasons I cannot mention or to attract certain authors, and that my behavior is not consistent with believing that users moderating their own posts is a good idea), which are of course fine hypotheses for you to consider. After spending probably over 40 hours writing this month explaining why I think authors moderating their posts is a good idea and making some defense of myself and my reasoning, I think I’ve done my duty in showing up to engage with this semi-prosecution for the time being, and will let people come to their own conclusions. (Perhaps I will write up a summary of the discussion at some point.)
and there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
Great, so all you need to do is make a rule specifying what speech constitutes “retribution” or “counterpunishment” that you want to censor on those grounds.
Maybe the rule could be something like, “No complaining about being banned by a specific user (but commenting on your own shortform strictly about the substance of a post that you’ve been banned from does not itself constitute complaining about the ban)” or “No arguing against the existence of the user ban feature except in designated moderation threads (which get algorithmically deprioritized in the new Feed).”
It’s your website! You have all the hard power! You can use the hard power to make the rules you want, and then the users of the website have a clear choice to either obey the rules or be banned from the site. Fine.
What I find hard to understand is why the mod team seems to think it’s good for them to try to shape culture by means other than clear and explicit rules that could be neutrally enforced. Telling people to “stop optimizing in a fairly deep way” is not a rule because of how vague and potentially all-encompassing it is. Telling people to avoid “mak[ing] people feel judged or not” is not a rule because I don’t have control over how other people feel.
“Don’t tell people ‘I’m judging you about X’” is a rule. I can do that.
What I can’t do is convincingly pretend to be a person with a completely different personality such that people who are smart about subtext can’t even guess from subtle details of my writing style that I might privately be judging them.
I mean, maybe I could if I tried very hard? But I have too much self-respect to try. If the mod team wants to force temperamentally judgemental people to convincingly pretend to be non-judgemental, that seems really crazy.
I know, the mods didn’t say “We want temperamentally judgemental people to convincingly pretend to have a completely different personality” in those words; rather, Habryka said he wanted to “avoid a passive aggressive culture tak[ing] hold”. I just don’t see what the difference is supposed to be in practice.
Mm, I think sometimes I’d rather judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.
A key question is: Are authors comfortable using the mod tools the site gives them to garden their posts?
You can write lots of judgmental comments criticizing an author’s posts, and then they can ban you from their comments because they find engaging with you to be exhausting, and then you can make a shortform where you and your friends call them a coward, and then they stop using the mod tools (and other authors do too) out of a fear that using the mod tools will result in a group of people getting together to bully and call them names in front of the author’s peers. That’s a situation where authors become uncomfortable using their mod tools. But I don’t know precisely what comment was wrong, and what was wrong with it, such that had it not happened the outcome would counterfactually not have obtained, i.e. that you wouldn’t have found some other way to make the author uncomfortable using his mod tools (though we could probably all agree on some Schelling lines).
Also I am hesitant to fully outlaw behavior that might sometimes be appropriate. Perhaps there are some situations where it’s appropriate to criticize someone on your shortform after they banned you. Or perhaps sometimes you should call someone a coward for not engaging with your criticism.
Overall I believe sometimes I will have to look at the outcome and see whether the gain in this situation was worth the cost, and directly give positive/negative feedback based on that.
Related to other things you wrote, FWIW I think you have a personality that many people would find uncomfortable interacting with a lot. In-person I regularly read you as being deeply pained and barely able to contain strongly emotional and hostile outbursts. I think just trying to ‘follow the rules’ might not succeed at making everyone feel comfortable interacting with you, even via text, if they feel a deep hostility from you to them that is struggling to contain itself with rules like “no explicit insults”, and sometimes the right choice for them will just be to not engage with you directly. So I think it is a hypothesis worth engaging with that you should work to change your personality somewhat.
To be clear, I think (as Said has said) that it is worth people learning to be able to make space to engage with people like you whom they find uncomfortable, because you raise many good ideas and points (and engaging with you is something I relatively happily do, and this is a way I have grown stronger relative to myself of 10 years ago). I hope you find more success, as I respect many of your contributions. But I think a great many people who have good points to contribute don’t have as much capacity as me to do this, and you will sometimes have to take some responsibility for navigating this.
If the popular kids in the cool kids’ club don’t like Goldstein and your only goal is to make sure that the popular kids feel comfortable, then clearly your optimal policy is to kick Goldstein out of the club. But if you have some other goal that you’re trying to pursue with the club that the popular kids and Goldstein both have a stake in, then I think you do have to try to evaluate whether Goldstein “did anything wrong”, rather than just checking that everyone feels comfortable. Just ensuring that everyone feels comfortable at all costs, without regard to the reasons why people feel uncomfortable or any notion that some reasons aren’t legitimate grounds for intervention, amounts to relinquishing all control to anyone who feels uncomfortable when someone else doesn’t behave exactly how they want.
Something I appreciate about the existing user ban functionality is that it is a rule-based mechanism. I have been persuaded by Achmiz and Dai’s arguments that it’s bad for our collective understanding that user bans prevent criticism, but at least it’s a procedurally “fair” kind of badness that I can tolerate, not completely arbitrary tyranny. The impartiality really helps. Do you really want to throw away that scrap of legitimacy in the name of optimizing outcomes even harder? Why?
I think just trying to ‘follow the rules’ might not succeed at making everyone feel comfortable interacting with you
But I’m not trying to make everyone feel comfortable interacting with me. I’m trying to achieve shared maps that reflect the territory.
A big part of the reason some of my recent comments in this thread appeal to an inability or justified disinclination to convincingly pretend to not be judgmental is because your boss seems to disregard with prejudice Achmiz’s denials that his comments are “intended to make people feel judged”. In response to that, I’m “biting the bullet”: saying, okay, let’s grant that a commenter is judging someone; to what lengths must they go to conceal that, in order to prevent others from predictably feeling judged, given that people aren’t idiots and can read subtext?
I think there’s something much more fundamental at stake here, which is that an intellectual forum that’s being held hostage to people’s feelings is intrinsically hampered and can’t be at the forefront of advancing the art of human rationality. If my post claims X, and a commenter says, “No, that’s wrong, actually not-X because Y”, it would be a non-sequitur for me to reply, “I’d prefer you engage with what I wrote with more curiosity and kindness.” Curiosity and kindness are just not logically relevant to the claim! (If I think the commenter has misconstrued what I wrote, I could just say that.) It needs to be possible to discuss ideas without getting tone-policed to death. Once you start playing this game of litigating feelings and feelings about other people’s feelings, there’s no end to it. The only stable Schelling point that doesn’t immediately dissolve into endless total war is to have rules and for everyone to take responsibility for their own feelings within the rules.
I don’t think this is an unrealistic superhumanly high standard. As you’ve noticed, I am myself a pretty emotional person and tend to wear my heart on my sleeve. There are definitely times as recently as, um, yesterday, when I procrastinate checking this website because I’m scared that someone will have said something that will make me upset. In that sense, I think I do have some empathy for people who say that bad comments make them less likely to use the website. It’s just that, ultimately, I think that my sensitivity and vulnerability is my problem. Censoring voices that other people are interested in hearing would be making it everyone else’s problem.
I think there’s something much more fundamental at stake here, which is that an intellectual forum that’s being held hostage to people’s feelings is intrinsically hampered and can’t be at the forefront of advancing the art of human rationality.
An intellectual forum that is not being “held hostage” to people’s feelings will instead be overrun by hostile actors who either are in it just to hurt people’s feelings, or who want to win through hurting people’s feelings.
It’s just that, ultimately, I think that my sensitivity and vulnerability is my problem.
Some sensitivity is your problem. Some sensitivity is the “problem” of being human and not reacting like Spock. It is unreasonable to treat all sensitivity as being the problem of the sensitive person.
Mm, I think sometimes I’d rather judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.
This made my blood go cold, despite my thinking that it would be good if Said left LessWrong.
My first thought when I read “judge on the standard of whether the outcome is good” is that this lets you cherrypick your favorite outcomes without justifying them. My second is that knowing if something is good can be very complicated even after the fact, so predicting it ahead of time is challenging even if you are perfectly neutral.
I think it’s good LessWrong(’s admins) allows authors to moderate their own posts (and I’ve used that to ban Said from my own posts). I think it’s good LessWrong mostly doesn’t allow explicit insults (and wish this was applied more strongly). I think it’s good LessWrong evaluates commenting patterns, not just individual comments. But “nothing that makes authors feel bad about bans” is way too far.
It’s extremely common for all judicial systems to rely on outcome assessments instead of process assessments! In many domains this is obviously the right standard! It is very common to create environments where someone can sue for damages and not just have the judgement be dependent on negligence (and both thresholds are indeed commonly relevant for almost any civil case).
Like sure, it comes with various issues, but it seems obviously wrong to me to request that no part of the LessWrong moderation process relies on outcome assessments.
Okay. But I nonetheless believe it’s necessary that we have to judge communication sometimes by outcomes rather than by process.
Like, as a lower-stakes example, sometimes you try to teasingly make a joke at your friend’s expense, but they just find it mean, and you take responsibility for that and apologize. Just because you thought you were behaving right and communicating well doesn’t mean you were, and sometimes you accept feedback from others that says you misjudged a situation. I don’t have all the rules written down such that if you follow them your friend will read your comments as intended, sometimes I just have to check.
Similarly sometimes you try to criticize an author, but they take it as implying you’ll push back whenever they enforce boundaries on LessWrong, and then you apologize and clarify that you do respect them enforcing boundaries in general but stand by the local criticism. (Or you don’t and then site-mods step in.) I don’t have all the rules written down such that if you follow them the author will read your comments as intended, sometimes I just have to check.
Obviously mod powers can be abused, and having to determine on a case by case basis is a power that can be abused. Obviously it involves judgment calls. I did not disclaim this, I’m happy for anyone to point it out, perhaps nobody has mentioned it so far in this thread so it’s worth making sure the consideration is mentioned. And yeah, if you’re asking, I don’t endorse “nothing that makes authors feel bad about bans”, and there are definitely situations where I think it would be appropriate for us to reverse someone’s bans (e.g. if someone banned all of the top 20 authors in the LW review, I would probably think this is just not workable on LW and reverse that).
Sure, but “is my friend upset” is very different from “is the sum total of all the positive and negative effects of this, from first order until infinite order, positive”.
In-person I regularly read you as being deeply pained and barely able to contain strongly emotional and hostile outbursts.
You reacted to this with “Disagree”.
I have no idea how you could remotely know whether this is true, as I think you have never interacted with either Ben or Zack in person!
Also, it’s really extremely obviously true. Indeed, Zack frequently has the corresponding emotional and hostile outbursts, so it’s really extremely evident they are barely contained during a lot of it (since sometimes they do not end up contained, and then Zack apologizes for not containing them and explains that this is difficult for him).
You can write lots of judgmental comments criticizing an author’s posts, and then they can ban you from their comments because they find engaging with you to be exhausting, and then you can make a shortform where you and your friends call them a coward, and then they stop using the mod tools (and other authors do too) out of a fear that using the mod tools will result in a group of people getting together to bully and call them names in front of the author’s peers. That’s a situation where authors become uncomfortable using their mod tools.
Here’s what confuses me about this stance: do an author’s posts on Less Wrong (especially non-frontpage posts) constitute “the author’s private space”, or do they constitute “public space”?
If the former, then the idea that things that Alice writes about Bob on her shortform (or in non-frontpage posts) can constitute “bullying”, or are taking place “in front of” third parties (who aren’t making the deliberate choice to go to Alice’s private space), is nonsense.
If the latter, then the idea that authors should have the right to moderate discussions that are happening in a public space is clearly inappropriate.
I understood the LW mods’ position to be the former—that an author’s posts are their own private space, within the LW ecosystem (which is why it makes sense to let them set their own separate moderation policy there). But then I can’t make any sense of this notion of “bullying”, as applied to comments written on an author’s shortform (or non-frontpage posts).
It seems to me that these two ideas are incompatible.
What I find hard to understand is why the mod team seems to think it’s good for them to try to shape culture by means other than clear and explicit rules that could be neutrally enforced.
No judicial system in the world has ever arrived at the ability to have “neutrally enforced rules”, at least the way I interpret you to mean this. Case law is the standard in almost every legal tradition, and the US legal system relies heavily on things like “jury of your peers” type stuff to make judgements.
Intent frequently matters in legal decisions. Cognitive state of mind matters for legal decisions. Judges go through years of training and are part of a long lineage of people who have built up various heuristics and principles about how to judge cases. Individual courts have their own culture and track record.
And that is for the US legal system, which is absolutely not capable of operating remotely to the kind of standard that allows people to curate social spaces or deal with tricky kinds of social rulings. No company could make cultural or hiring or business decisions based on the standard of the US legal system. Neither could any internet forum.
There is absolutely no chance we will ever be able to codify LessWrong rules of conduct into a set of specific rules that can be neutrally judged by a third party. Zero chance. Give up. If that is something you need here, leave now. Feel free to try to build it for yourself.
I could see it being confusing because sometimes an author like Gordon is moderating you, and sometimes a site-mod like Habryka is moderating you, but they are using different standards, and the LW-mods are not typically endorsing the author standards as our own.
It’s not just confusing sometimes, it’s confusing basically all the time. It’s confusing even for me, even though I’ve spent all these years on Less Wrong, and have been involved in all of these discussions, and have worked on GreaterWrong, and have spent time thinking about moderation policies, etc., etc. For someone who is even a bit less “very on LW”[1]—it’s basically incomprehensible.
I mean, consider: whenever I comment on anything anywhere, on this website, I have to not only keep in mind the rules of LW (which I don’t actually know, because I can’t remember in what obscure, linked-from-nowhere-easily-findable, long, hard-to-parse post those rules are contained), and the norms of LW (which I understand only very vaguely, because they remain somewhere between “poorly explained” and “totally unexplained”), but also, in addition to those things, I have to keep in mind whose post I am commenting under, and somehow figure out from that not only what their stated “moderation policy” is (scare quotes because usually it’s not really a specification of a policy, it’s just sort of a vague allusion at a broad class of approaches to moderation policy), but also what their actual preferences are, and how they enforce those things.
(I mean, take this recent post. The “moderation policy” a.k.a. “commenting guidelines” are: “Reign of Terror—I delete anything I judge to be counterproductive”. What is that? That’s not anything. What is Nate going to judge to be “counterproductive”? I have no idea. How will this “policy” be applied? I have no idea. Does anyone besides Nate himself know how he’s going to moderate the comments on his posts? Probably not. Does Nate himself even know? Well, maybe he does, I don’t know the guy; but a priori, there’s a good chance that he doesn’t know. The only way to proceed here is to just assume that he’s going to be reasonable… but it is incredibly demoralizing to invest effort into writing some comments, only for them to be summarily deleted, on the basis of arbitrary rules you weren’t told of beforehand, or “norms” that are totally up to arbitrary interpretation, etc. The result of an environment like that is that people will treat commenting here as strictly a low-effort activity. Why bother to put time and thought into your comments, if “whoops, someone’s opaque whim dictates that your comments are now gone” is a strong possibility?)
The whole thing sort of works most of the time because most people on LW don’t take this “set your own moderation policy” stuff too seriously, and basically (both when posting and when commenting) treat the site as if the rules were something like what you’d find on a lightly moderated “nerdy” mailing list or classic-style discussion forum.
But that just results in the same sorts of “selective enforcement” situations as you get in any real-world legal regime that criminalizes almost everything and enforces almost nothing.
Not a direct response, but I want to take some point in this discussion (I think I said this to Zack in-person the other day) to say that, while some people are arguing that things should as a rule be collaborative and not offensive (e.g. to varying extents Gordon and Rafael), this is not the position that the LW mods are arguing for. We’re arguing that authors on LessWrong should be able to moderate their posts with different norms/standards from one another, and that there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
I could see it being confusing because sometimes an author like Gordon is moderating you, and sometimes a site-mod like Habryka is moderating you, but they are using different standards, and the LW-mods are not typically endorsing the author standards as our own. I even generally agree with many of the counterarguments that e.g. Zack makes against those norms being the best ones. Some of my favorite comments on this site are offensive (where ‘offensive’ is referring to Wei’s meaning of ‘lowering someone’s social status’).
What is currently the acceptable range of moderation norms/standards (according to the LW mod team)? For example if someone blatantly deletes/bans their most effective critics, is that acceptable? What if they instead subtly discourage critics (while being overtly neutral/welcoming) by selectively enforcing rules more stringently against their critics? What if they simply ban all “offensive” content, which as a side effect discourages critics (since as I mentioned earlier, criticism almostly inescapably implies offense)?
And what does “retribution or counter-punishment” mean? If I see an author doing one of the above, and question or criticize that in the comments or elsewhere, is that considered “retribution or counter-punishment” given that my comment/post is also inescapably offensive (status-lowering) toward the author?
I think the first answer is “Mostly people aren’t using this feature, and the few times people have used it it has not felt to us like abuse or strongly needing to be pushed back on” so I don’t have any examples to point to.
But I’ll quickly generate thoughts on each of the hypothetical scenarios you briefly gestured to.
It’d depend on how things played out. If Andrew writes a blogpost with a big new theory of rationality, and then Bob and Charlie and Dave all write decisive critiques and then their comments are deleted and banned from commenting on his posts, I think it’s quite plausible that they’ll write a new post together with the copy-paste of their comments and it’ll get more karma than the original. This seems like a good-enough outcome to me. On the other hand if Andrew only gets criticism from Bob, and then deletes Bob’s comments and bans him from commenting on his posts, and then Bob leaves the site, I would take more active action, such as perhaps removing Andrew’s ability to ban people, and reaching out to Bob to thank him for his comments and encourage him to return.
That sounds like there’d be some increased friction on criticism. Hopefully we’d try to notice it and counteract it, or hopefully the commenters who were having annoying experience being moderated would notice and move to shortform or posts and do their criticism from there. But plausibly there’d just be some persistent additional annoyances or costs that certain users would have to pay.
I mean, again, probably this would just be very incongruous with LessWrong and it wouldn’t really work and they’d have to ban like 30+ users because everyone wouldn’t get this and would keep doing things the author didn’t like, and the author wouldn’t eventually leave if they needed that sort of environment, or we’d step in after like 5 and say “this is kind of crazy, you have to stop doing this, it isn’t going to work out, we’re removing your ability to ban users”. So many of the good comments on LessWrong lower their interlocutor’s status in some way.
It means actions that predictably make the author feel that them using the ban feature in general is illegitimate or that using it will cause them to have their reputation attacked, regardless of reason or context, in response to them using the ban feature.
Many many writers on LessWrong are capable of critiquing a single instance of a ban while taking care to communicate that they are not pushing back on all instances of banning, and can also credibly offer support in other instances that are more reasonable.
Generally it is harder to signal this when you are complaining about your own banning. For in-person contexts (e.g. events) I generally spend effort to ensure that people do not feel any cost for not inviting me to events or spaces, and not expect that I will complain loudly or cause them to lose social status for it, and a similar (but not identical) heuristic applies here. If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
There is still good form and bad form to imposing costs on people for moderating their spaces, and costs imposed on people for moderating their spaces (based on disagreement or even trying to fix biases in the moderation) are the most common reason for good spaces not existing; moderation is unpleasant work, lots of people feel entitled to make strong social bids on you for your time and to threaten to attack your social standing, and I’ve seen many spaces degrade due to unwillingness to moderate. You should of course think about this if you are considering reliably complaining loudly every time anyone uses a ban feature on people.
Added: I hope you get a sense from reading this that your questions don’t have simple answers, but that the scenarios you describe require active steering depending on the dynamics at play. I am somewhat wary you will keep asking me a lot of short questions that, due to your inexperience moderating spaces, you will assume have simple answers, and I will have to do lots of work generating all the contexts to show how things play out, else Said or someone allied with him against him being moderated on LW will claim I am unable to answer the most basic of questions and this shows me to be either ignorant or incompetent. And, man, this is a lot of moderation discussion.
If I was in this circumstance, I would be pretty worried about my own biases, and ask neutral or potentially less biased parties whether there might be more charitable and reasonable hypotheses why that person doesn’t want to interact with me. If there isn’t though, why shouldn’t I complain and e.g. make it common knowledge that my valuable criticism is being suppressed? (Obviously I would also take into consideration social/political realities, not make enemies I can’t afford to make, etc.)
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seems worth the cost (to potential critics and people like me who really value criticism) of not knowing when their critiques might be unilaterally deleted or banned by post authors. I mean aside from the “benefit” of attracting/retaining the authors who demand such unilateral powers.
Aside from the above “benefit”, It seems like you’re currently getting the worst of both worlds: lack of significant usage and therefore potential positive effects, and lots of controversy when it is occasionally used. If you really thought this was an important feature for the long term health of the community, wouldn’t you do something to make it more popular? (Or have done it in the past 7 years since the feature came out?) But instead you (the mod team) seem content that few people use it, only coming out to defend the feature when people explicitly object to it. This only seems to make sense if the main motivation is again to attract/retain certain authors.
It seems like if you actually wanted or expected many people to use this feature, you would have written some guidelines on what people can and can’t do, or under what circumstances their moderation actions might be reversed by the site moderators. I don’t think I was expecting the answers to my questions to necessarily be simple, but rather that the answers already exist somewhere, at least in the form of general guidelines that might need to be interpreted to answer my specific questions.
I mean, mostly we’ve decided to give the people who complain about moderation a shot, and compensate by spending much much more moderation effort from the moderators. My guess is this has cost a large amount of counterfactual quality of the site, many contributors, etc.
In-general, I find argument of the form “so to the extend that LW hasn’t been destroyed, X can’t be that valuable” pretty weak. It’s very hard to assess the counterfactual, and “if not X, LessWrong would have been completely destroyed” is rarely the case for almost any X that is in dispute.
My guess is LW would be a lot better if more people felt comfortable moderating things, and in the present world, there are a lot of costs born by the site admins that wouldn’t be necessary otherwise.
What do you mean by this? Until I read this sentence, I saw you as giving the people who demand unilateral moderation powers a shot, and denying the requests of people like me to reduce such powers.
My not very confident guess at this point is that if it weren’t for people like me, you would have pushed harder for people to moderate their own spaces more, perhaps by trying to publicly encourage this? And why did you decide to go against your own judgment on it, given that “people who complain about moderation” have no particular powers, except the power of persuasion (we’re not even threatening to leave the site!), and it seems like you were never persuaded?
This seems implausible to me given my understanding of human nature (most people really hate to see/hear criticism) and history (few people can resist the temptation to shut down their critics when given the power and social license or cover to do so). If you want a taste of this, try asking DeepSeek some questions about the CCP.
But presumably you also know this (at least abstractly, but perhaps not as viscerally as I do, coming from a Chinese background, where even before the CCP, criticism in many situations was culturally/socially impossible), so I’m confused and curious why you believe what you do.
My guess is that you see a constant stream of bad comments, and wish you could outsource the burden of filtering them to post authors (or combine efforts to do more filtering). But as an occasional post author, my experience is that I’m not a reliable judge of what counts as a “bad comment”, e.g., I’m liable to view a critique as a low quality comment, only to change my mind later after seeing it upvoted and trying harder to understand/appreciate its point. Given this, I’m much more inclined to leave the moderation to the karma system, which seems to work well enough in leaving bad comments at low karma/visibility by not upvoting them, and even when it’s occasionally wrong, still provides a useful signal to me that many people share the same misunderstanding and it’s worth my time to try to correct (or maybe by engaging with it I find out that I still misjudged it).
But if you don’t think it works well enough… hmm I recall writing a post about moderation tech proposals in 2016 and maybe there has been newer ideas since then?
I mean, I have written like 50,000+ words about this at this point in various comment threads. About why I care about archipelagos, and why I think it’s hard and bad to try to have centralized control about culture, about how much people hate being in places with ambiguous norms, and many other things. I don’t fault you for not reading them all, but I have done a huge amount of exposition.
Because the only choice at this point would be to ban them, since they appear to be willing to take any remaining channel or any remaining opportunity to heap approximately as much scorn and snark and social punishment on anyone daring to do moderation they disagree with, and I value things like readthesequences.com and many other contributions from the relevant people enough that that seemed really costly and sad.
My guess is I will now do this, as it seems like the site doesn’t really have any other choice, and I am tired and have better things to do, but I think I was justified and right to be hesitant to do this for a while (though yes, in ex-post it would have obviously been better to just do that 5 years ago).
It seems to me there are plenty of options aside from centralized control and giving authors unilateral powers, and last I remember (i.e., at the end of this post) the mod team seems to be pivoting to other possibilities, some of which I would find much more reasonable/acceptable. I’m confused why you’re now so focused again on the model of authors-as-unilateral-moderators. Where have you explained this?
I have filled my interest in answering questions on this, so I’ll bow out and wish you good luck. Happy to chat some other time.
I don’t think we ever “pivoted to other possibilities” (Ray often makes posts with moderation things he is thinking about, and the post doesn’t say anything about pivoting). Digging up the exact comments on why ultimately there needs to be at least some authority vested in authors as moderators seems like it would take a while.
I meant pivot in the sense of “this doesn’t seem to be working well, we should seriously consider other possibilities” not “we’re definitely switching to a new moderation model”, but I now get that you disagree with Ray even about this.
Your comment under Ray’s post wrote:
This made me think you were also no longer very focused on the authors-as-unilateral-moderators model and was thinking more about subreddit-like models that Ray mentioned in his post.
BTW I’ve been thinking for a while that LW needs a better search, as I’ve also often been in the position being unable to find some comment I’ve written in the past.
Instead of one-on-one chats (or in addition to them), I think you should collect/organize your thoughts in a post or sequence, for a number of reasons including that you seem visibly frustrated that after having written 50k+ words on the topic, people like me still don’t know your reasons for preferring your solution.
Huh, ironically I now consider the AI Alignment Forum a pretty big mistake in how it’s structured (for reasons mostly orthogonal but not unrelated to this).
Agree.
I think I have elaborated non-trivially on my reasons in this thread, so I don’t really think it’s an issue of people not finding it.
I do still agree it would be good to do more sequences-like writing on it, though like, we are already speaking in the context of Ray having done that a bunch (referencing things like the Archipelago vision), and writing top-level content takes a lot of time and effort.
It’s largely an issue of lack of organization and conciseness (50k+ words is a minus, not a plus in my view), but also clearly an issue of “not finding it”, given that you couldn’t find an important comment of your own, one that (judging from your description of it) contains a core argument needed to understand your current insistence on authors-as-unilateral-moderators.
I’m having a hard time seeing how this reply is hooking up to what I wrote. I didn’t say critics; I spoke much more generally. If someone wants to keep their distance from you because you have bad body odor, or because they think your job is unethical, and you either don’t know this or disagree, it’s pretty bad social form to go around loudly complaining every time they keep their distance from you. It makes it more socially costly for them to act in accordance with their preferences and creates a bunch of unnecessary social conflict. I’m pretty sure this is obvious, and it doesn’t change if you’ve suddenly developed a ‘criticism’ of them.
I mean, I think it pretty plausible that LW would be doing even better than it is with more people doing more gardening and making more moderated spaces within it, archipelago-style.
I read you as questioning my honesty and motivations a bunch (e.g. you have a few times mentioned that I probably only care about this because of status reasons I cannot mention or to attract certain authors, and that my behavior is not consistent with believing that users moderating their own posts is a good idea), which are of course fine hypotheses for you to consider. After spending probably over 40 hours this month writing explanations of why I think authors moderating their posts is a good idea and making some defense of myself and my reasoning, I think I’ve done my duty in showing up to engage with this semi-prosecution for the time being, and will let people come to their own conclusions. (Perhaps I will write up a summary of the discussion at some point.)
Great, so all you need to do is make a rule specifying what speech constitutes “retribution” or “counterpunishment” that you want to censor on those grounds.
Maybe the rule could be something like, “No complaining about being banned by a specific user (but commenting on your own shortform strictly about the substance of a post that you’ve been banned from does not itself constitute complaining about the ban)” or “No arguing against the existence of the user ban feature except in designated moderation threads (which get algorithmically deprioritized in the new Feed).”
It’s your website! You have all the hard power! You can use the hard power to make the rules you want, and then the users of the website have a clear choice to either obey the rules or be banned from the site. Fine.
What I find hard to understand is why the mod team seems to think it’s good for them to try to shape culture by means other than clear and explicit rules that could be neutrally enforced. Telling people to “stop optimizing in a fairly deep way” is not a rule because of how vague and potentially all-encompassing it is. Telling people to avoid “mak[ing] people feel judged or not” is not a rule because I don’t have control over how other people feel.
“Don’t tell people ‘I’m judging you about X’” is a rule. I can do that.
What I can’t do is convincingly pretend to be a person with a completely different personality such that people who are smart about subtext can’t even guess from subtle details of my writing style that I might privately be judging them.
I mean, maybe I could if I tried very hard? But I have too much self-respect to try. If the mod team wants to force temperamentally judgemental people to convincingly pretend to be non-judgemental, that seems really crazy.
I know, the mods didn’t say “We want temperamentally judgemental people to convincingly pretend to have a completely different personality” in those words; rather, Habryka said he wanted to “avoid a passive aggressive culture tak[ing] hold”. I just don’t see what the difference is supposed to be in practice.
Mm, I think sometimes I’d rather judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.
A key question is: Are authors comfortable using the mod tools the site gives them to garden their posts?
You can write lots of judgmental comments criticizing an author’s posts, and then they can ban you from their comments because they find engaging with you to be exhausting, and then you can make a shortform where you and your friends call them a coward, and then they stop using the mod tools (and other authors do too) out of a fear that using the mod tools will result in a group of people getting together to bully them and call them names in front of their peers. That’s a situation where authors become uncomfortable using their mod tools. But I don’t know precisely which comment was wrong, and what was wrong with it, such that had it not happened the outcome would counterfactually not have obtained, i.e. that you wouldn’t have found some other way to make the author uncomfortable using his mod tools (though we could probably all agree on some Schelling lines).
Also I am hesitant to fully outlaw behavior that might sometimes be appropriate. Perhaps there are some situations where it’s appropriate to criticize someone on your shortform after they banned you. Or perhaps sometimes you should call someone a coward for not engaging with your criticism.
Overall I believe sometimes I will have to look at the outcome and see whether the gain in this situation was worth the cost, and directly give positive/negative feedback based on that.
Related to other things you wrote, FWIW I think you have a personality that many people would find uncomfortable interacting with a lot. In person, I regularly read you as being deeply pained and barely able to contain strongly emotional and hostile outbursts. I think just trying to ‘follow the rules’ might not succeed at making everyone feel comfortable interacting with you, even via text, if they feel a deep hostility from you toward them that is struggling to contain itself with rules like “no explicit insults”, and sometimes the right choice for them will just be to not engage with you directly. So I think it is a hypothesis worth engaging with that you should work to change your personality somewhat.
To be clear, I think (as Said has said) that it is worth people learning to make space to engage with people like you whom they find uncomfortable, because you raise many good ideas and points (engaging with you is something I relatively happily do, and it is a way I have grown stronger relative to myself of 10 years ago), and I hope you find more success, as I respect many of your contributions. But I think a great many people who have good points to contribute don’t have as much capacity as me to do this, and you will sometimes have to take some responsibility for navigating that.
A key reason to favor behavioral rules over trying to directly optimize outcomes (even granting that enforcement can’t be completely mechanized and there will always be some nonzero element of human judgement) is that act consequentialism doesn’t interact well with game theory, particularly when one of the consequences involved is people’s feelings.
If the popular kids in the cool kids’ club don’t like Goldstein and your only goal is to make sure that the popular kids feel comfortable, then clearly your optimal policy is to kick Goldstein out of the club. But if you have some other goal that you’re trying to pursue with the club that the popular kids and Goldstein both have a stake in, then I think you do have to try to evaluate whether Goldstein “did anything wrong”, rather than just checking that everyone feels comfortable. Just ensuring that everyone feels comfortable at all costs, without regard to the reasons why people feel uncomfortable or any notion that some reasons aren’t legitimate grounds for intervention, amounts to relinquishing all control to anyone who feels uncomfortable when someone else doesn’t behave exactly how they want.
Something I appreciate about the existing user ban functionality is that it is a rule-based mechanism. I have been persuaded by Achmiz and Dai’s arguments that it’s bad for our collective understanding that user bans prevent criticism, but at least it’s a procedurally “fair” kind of badness that I can tolerate, not completely arbitrary tyranny. The impartiality really helps. Do you really want to throw away that scrap of legitimacy in the name of optimizing outcomes even harder? Why?
But I’m not trying to make everyone feel comfortable interacting with me. I’m trying to achieve shared maps that reflect the territory.
A big part of the reason some of my recent comments in this thread appeal to an inability or justified disinclination to convincingly pretend not to be judgmental is that your boss seems to disregard with prejudice Achmiz’s denials that his comments are “intended to make people feel judged”. In response to that, I’m “biting the bullet”: saying, okay, let’s grant that a commenter is judging someone; to what lengths must they go to conceal that, in order to prevent others from predictably feeling judged, given that people aren’t idiots and can read subtext?
I think there’s something much more fundamental at stake here, which is that an intellectual forum that’s being held hostage to people’s feelings is intrinsically hampered and can’t be at the forefront of advancing the art of human rationality. If my post claims X, and a commenter says, “No, that’s wrong, actually not-X because Y”, it would be a non-sequitur for me to reply, “I’d prefer you engage with what I wrote with more curiosity and kindness.” Curiosity and kindness are just not logically relevant to the claim! (If I think the commenter has misconstrued what I wrote, I could just say that.) It needs to be possible to discuss ideas without getting tone-policed to death. Once you start playing this game of litigating feelings and feelings about other people’s feelings, there’s no end to it. The only stable Schelling point that doesn’t immediately dissolve into endless total war is to have rules and for everyone to take responsibility for their own feelings within the rules.
I don’t think this is an unrealistic superhumanly high standard. As you’ve noticed, I am myself a pretty emotional person and tend to wear my heart on my sleeve. There are definitely times as recently as, um, yesterday, when I procrastinate checking this website because I’m scared that someone will have said something that will make me upset. In that sense, I think I do have some empathy for people who say that bad comments make them less likely to use the website. It’s just that, ultimately, I think that my sensitivity and vulnerability is my problem. Censoring voices that other people are interested in hearing would be making it everyone else’s problem.
An intellectual forum that is not being “held hostage” to people’s feelings will instead be overrun by hostile actors who either are in it just to hurt people’s feelings, or who want to win through hurting people’s feelings.
Some sensitivity is your problem. Some sensitivity is the “problem” of being human and not reacting like Spock. It is unreasonable to treat all sensitivity as being the problem of the sensitive person.
This made my blood run cold, despite my thinking it would be good if Said left LessWrong.
My first thought when I read “judge on the standard of whether the outcome is good” is that this lets you cherry-pick your favorite outcomes without justifying them. My second is that knowing whether something is good can be very complicated even after the fact, so predicting it ahead of time is challenging even if you are perfectly neutral.
I think it’s good LessWrong(’s admins) allows authors to moderate their own posts (and I’ve used that to ban Said from my own posts). I think it’s good LessWrong mostly doesn’t allow explicit insults (and wish this was applied more strongly). I think it’s good LessWrong evaluates commenting patterns, not just individual comments. But “nothing that makes authors feel bad about bans” is way too far.
It’s extremely common for judicial systems to rely on outcome assessments instead of process assessments! In many domains this is obviously the right standard! It is very common to create environments where someone can sue for damages without the judgement being dependent on negligence (and both thresholds are indeed commonly relevant in almost any civil case).
Like sure, it comes with various issues, but it seems obviously wrong to me to request that no part of the LessWrong moderation process relies on outcome assessments.
Okay. But I nonetheless believe we sometimes have to judge communication by outcomes rather than by process.
Like, as a lower-stakes example: sometimes you try to teasingly make a joke at your friend’s expense, but they just find it mean, and you take responsibility for that and apologize. Just because you thought you were behaving right and communicating well doesn’t mean you were, and sometimes you accept feedback from others that says you misjudged a situation. I don’t have all the rules written down such that if you follow them your friend will read your comments as intended; sometimes I just have to check.
Similarly, sometimes you try to criticize an author, but they take it as implying you’ll push back whenever they enforce boundaries on LessWrong, and then you apologize and clarify that you do respect them enforcing boundaries in general but stand by the local criticism. (Or you don’t, and then site-mods step in.) I don’t have all the rules written down such that if you follow them the author will read your comments as intended; sometimes I just have to check.
Obviously mod powers can be abused, and having to determine things on a case-by-case basis is a power that can be abused. Obviously it involves judgment calls. I did not disclaim this, and I’m happy for anyone to point it out; perhaps nobody has mentioned it so far in this thread, so it’s worth making sure the consideration is mentioned. And yeah, if you’re asking, I don’t endorse “nothing that makes authors feel bad about bans”, and there are definitely situations where I think it would be appropriate for us to reverse someone’s bans (e.g. if someone banned all of the top 20 authors in the LW review, I would probably think that is just not workable on LW and reverse it).
Sure, but “is my friend upset?” is very different from “is the sum total of all the positive and negative effects of this, from first order to infinite order, positive?”
I don’t really know what we’re talking about right now.
Said, you reacted to this:
with “Disagree”.
I have no idea how you could remotely know whether this is true, as I think you have never interacted with either Ben or Zack in person!
Also, it’s really extremely obviously true. Indeed, Zack frequently has the corresponding emotional and hostile outbursts, so it’s really extremely evident that they are barely contained during a lot of it (since sometimes they do not end up contained, and then Zack apologizes for failing to contain them and explains that this is difficult for him).
Here’s what confuses me about this stance: do an author’s posts on Less Wrong (especially non-frontpage posts) constitute “the author’s private space”, or do they constitute “public space”?
If the former, then the idea that things that Alice writes about Bob on her shortform (or in non-frontpage posts) can constitute “bullying”, or are taking place “in front of” third parties (who aren’t making the deliberate choice to go to Alice’s private space), is nonsense.
If the latter, then the idea that authors should have the right to moderate discussions that are happening in a public space is clearly inappropriate.
I understood the LW mods’ position to be the former—that an author’s posts are their own private space, within the LW ecosystem (which is why it makes sense to let them set their own separate moderation policy there). But then I can’t make any sense of this notion of “bullying”, as applied to comments written on an author’s shortform (or non-frontpage posts).
It seems to me that these two ideas are incompatible.
No judicial system in the world has ever arrived at the ability to have “neutrally enforced rules”, at least the way I interpret you to mean this. Case law is the standard in almost every legal tradition, and the US legal system relies heavily on things like “jury of your peers”-type mechanisms to make judgements.
Intent frequently matters in legal decisions. Cognitive state of mind matters for legal decisions. Judges go through years of training and are part of a long lineage of people who have built up various heuristics and principles about how to judge cases. Individual courts have their own culture and track record.
And that is for the US legal system, which is absolutely not capable of operating remotely to the kind of standard that allows people to curate social spaces or deal with tricky kinds of social rulings. No company could make cultural or hiring or business decisions based on the standard of the US legal system. Neither could any internet forum.
There is absolutely no chance we will ever be able to codify LessWrong rules of conduct into a set of specific rules that can be neutrally judged by a third party. Zero chance. Give up. If that is something you need here, leave now. Feel free to try to build it for yourself.
It’s not just confusing sometimes, it’s confusing basically all the time. It’s confusing even for me, even though I’ve spent all these years on Less Wrong, and have been involved in all of these discussions, and have worked on GreaterWrong, and have spent time thinking about moderation policies, etc., etc. For someone who is even a bit less “very on LW”[1]—it’s basically incomprehensible.
I mean, consider: whenever I comment on anything anywhere on this website, I have to keep in mind not only the rules of LW (which I don’t actually know, because I can’t remember which obscure, long, hard-to-parse, linked-from-nowhere post contains them) and the norms of LW (which I understand only very vaguely, because they remain somewhere between “poorly explained” and “totally unexplained”), but also, in addition to those things, whose post I am commenting under, and I somehow have to figure out from that not only what their stated “moderation policy” is (scare quotes because usually it’s not really a specification of a policy, just a vague allusion to a broad class of approaches to moderation policy), but also what their actual preferences are, and how they enforce those things.
(I mean, take this recent post. The “moderation policy” a.k.a. “commenting guidelines” are: “Reign of Terror—I delete anything I judge to be counterproductive”. What is that? That’s not anything. What is Nate going to judge to be “counterproductive”? I have no idea. How will this “policy” be applied? I have no idea. Does anyone besides Nate himself know how he’s going to moderate the comments on his posts? Probably not. Does Nate himself even know? Well, maybe he does, I don’t know the guy; but a priori, there’s a good chance that he doesn’t know. The only way to proceed here is to just assume that he’s going to be reasonable… but it is incredibly demoralizing to invest effort into writing some comments, only for them to be summarily deleted, on the basis of arbitrary rules you weren’t told of beforehand, or “norms” that are totally up to arbitrary interpretation, etc. The result of an environment like that is that people will treat commenting here as strictly a low-effort activity. Why bother to put time and thought into your comments, if “whoops, someone’s opaque whim dictates that your comments are now gone” is a strong possibility?)
The whole thing sort of works most of the time because most people on LW don’t take this “set your own moderation policy” stuff too seriously, and basically (both when posting and when commenting) treat the site as if the rules were something like what you’d find on a lightly moderated “nerdy” mailing list or classic-style discussion forum.
But that just results in the same sorts of “selective enforcement” situations as you get in any real-world legal regime that criminalizes almost everything and enforces almost nothing.
[1] By analogy with “very online”.