It’s indeed the case that I haven’t been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it’d had Twitter’s options for turning off commenting entirely.
So yes, I suppose that people could go ahead and make this decision without me. I haven’t been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.
because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience
Curious whether you have any guesses on what would make it seem like a sympathetic decision to the audience. My model here is that this is largely not really a technical problem, but more of a social problem (which is e.g. better worked towards by things like me writing widely read posts on moderation), though I still like trying to solve social problems with better technical solutions and am curious whether you have ideas (that are not “turn off commenting entirely”, which I do think is a bad idea for LW in particular).
I’m not sure what Eliezer is referring to, but my guess is that many of the comments that he would mark as “elaborate-misinterpretations”, I would regard as reasonable questions / responses, and I would indeed frown on Eliezer just deleting them. (Though also shrug, since the rules are that authors can delete whatever comments they want.)
Some examples that come to mind are this discussion with Buck and this discussion with Matthew Barnett, in which (to my reading of things) Eliezer seems to be weirdly missing what the other person is saying at least as much as they are missing what he is saying.
From the frustration Eliezer expressed in those threads, I would guess that he would call these elaborate-misinterpretations.
My take is that there’s some kind of weird fuckyness about communicating about some of these topics where both sides feel exasperation that the other side is apparently obstinately mishearing them. I would indeed think it would be worse if the post author in posts like that just deleted the offending comments.
I currently doubt the Buck thread would qualify as such from Eliezer’s perspective (and agree with you there that in as much as Eliezer disagrees, he is wrong in that case).
IMO I do think it’s a pretty bad mark on LW’s reputation that posts like Matthew’s keep getting upvoted, with what seem to me like quite aggressively obtuse adversarial interpretations of what people are saying.
The existence of the latter unfortunately makes the former much harder to navigate.
I’m guessing that there are a lot of people like me, who have such a strong prior on “a moderator shouldn’t mod their own threads, just like a judge shouldn’t judge cases involving themselves”, plus our own experiences showing that the alternative of forum-like moderation works well enough, that it’s impossible to overcome this via abstract argumentation. I think you’d need to present some kind of evidence that it really leads to better results than the best available alternative.
I’m guessing that there are a lot of people like me, who have such a strong prior on “a moderator shouldn’t mod their own threads, just like a judge shouldn’t judge cases involving themselves”
Nowhere on the whole wide internet works like that! Clearly the vast majority of people do not think that authors shouldn’t moderate their own threads. Practically nowhere on the internet do you even have the option for anything else.
Nowhere on the whole wide internet works like that! Clearly the vast majority of people do not think that authors shouldn’t moderate their own threads. Practically nowhere on the internet do you even have the option for anything else.
Where’s this coming from all of a sudden? Forums work like this, Less Wrong used to work like this. Data Secrets Lox still works like this. Most subreddits work like this. This whole thread is about how maybe the places that work like this have the right idea, so it’s a bit late in the game to open up with “they don’t exist and aren’t a thing anyone wants”.
Yes, Reddit is one of the last places on the internet where this is semi-common, but even there, most subreddits are moderated by people who are active posters, and there are no strong norms against moderators moderating responses to their own comments or posts.
I agree I overstated here and that there are some places on the internet where this is common practice, but it’s really a very small fraction of the internet these days. You might bemoan this fate of the internet, but it’s just really not how most of the world thinks content moderation works.
There is actually a significant difference between “Nowhere on the whole wide internet works like that!” and “few places work like that”. It’s not just a nitpick, because to support my point that it will be hard for Eliezer to get social legitimacy for freely exercising author mod power, I just need there to be a not-too-tiny group of people on the Internet who still prefer to have no author moderation (it can be small in absolute numbers, as long as it’s not near zero, since they’re likely to congregate at a place like LW that values rationality and epistemics). The fact that there are still even a few places on the Internet that work like this makes a big difference to how plausible my claim is.
I mean, I think no, if truly there is only a relatively small fraction of people like that around, we as the moderators can just ask those people to leave. Like, it’s fine if we have to ask hundreds of people to leave, the world is wide and big. If most of the internet is on board with not having this specific stipulation, then there is a viable LessWrong that doesn’t have those people.
No, I don’t “need” to do that. This is (approximately) my forum. If anything you “need” to present some kind of evidence that bridges the gap here! If you don’t like it build your own forum that is similarly good or go to a place where someone has built a forum that does whatever you want here.
The point of the post is not to convince everyone, there was never any chance of that, it’s to build enough shared understanding that people understand the principles of the space and can choose to participate or leave.
Ok I misunderstood your intentions for writing such posts. Given my new understanding, will you eventually move to banning or censoring people for expressing disapproval of what they perceive as bad or unfair moderation, even in their own “spaces”? I think if you don’t, then not enough people will voluntarily leave or self-censor such expressions of disapproval to get the kind of social legitimacy that Eliezer and you desire, but if you do, I think you’ll trigger an even bigger legitimacy problem because there won’t be enough buy-in for such bans/censorship among the LW stakeholders.
If you don’t like it build your own forum that is similarly good or go to a place where someone has built a forum that does whatever you want here.
This is a terrible idea given the economy of scale in such forums.
Given my new understanding, will you eventually move to banning or censoring people for expressing disapproval of what they perceive as bad or unfair moderation, even in their own “spaces”?
I mean, I had a whole section in the Said post about how I do think it’s a dick move to try to socially censure people for using any moderation tools. If someone keeps trying to create social punishment for people doing that, then yeah, I will ask them to please do that somewhere else but here, or more likely, leave the content up but reduce the degree to which things like the frontpage algorithm feed attention to it. I don’t know how else any norms on the site are supposed to bottom out.
Top-level posts like this one seem totally fine. Like, if someone wants to be like “I am not trying to force some kind of social punishment on anyone, but I do think there is a relevant consideration here, but I also understand this has been litigated a bunch and I am not planning to currently reopen that”, then that’s fine. Of course you did kind of reopen it, which to be clear I think is fine on the margin, but yeah, I would totally ask you to stop if you did that again and again.
I mean, I had a whole section in the Said post about how I do think it’s a dick move to try to socially censure people for using any moderation tools.
I think an issue you’ll face is that few people will “try to socially censure people for using any moderation tools”; instead, different people will express disapproval of different instances of perceived bad moderation, which adds up to a large enough share of all author moderation being disapproved of (or, worse, blowing up into big dramas) that authors like Eliezer do not feel there’s enough social legitimacy to really use them.
(Like in this case I’m not following the whole site and trying to censure anyone who does author moderation, but speaking up because I myself got banned!)
And Eliezer’s comment hints at why this would happen: the comments he wants to delete are often highly upvoted. If you delete such comments, and the mod isn’t a neutral third party, of course a lot of people will feel it was wrong/unfair and want to express disapproval, but they probably won’t be the same people each time.
How are you going to censor or deprioritize such expressions of disapproval? By manual mod intervention? AI automation? Instead of going to that trouble and causing a constant stream of resentment from people feeling wronged and silenced, it seems better for Eliezer to just mark the comments that misinterpret him as misinterpretations (maybe through the react system or a more prominent variation of it, if he doesn’t want to reply to each one and say “this is a misinterpretation”). One idea is for reacts from the OP author to be distinguished or displayed more prominently somehow.
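To make that last suggestion concrete, here is a minimal sketch with hypothetical types and names throughout (this is not LW’s actual react system or API): reacts left by the post’s author get surfaced as distinct badges, so an “OP: misinterpretation” mark is easy for readers to notice.

```typescript
// Illustrative sketch only, hypothetical types and names (not LW's actual
// react system): surface reacts left by the post's author as distinct badges.

interface React {
  userId: string;
  kind: "misinterpretation" | "agree" | "disagree";
}

interface CommentView {
  commentId: string;
  reacts: React[];
  authorReactBadges: string[]; // rendered more prominently than other reacts
}

function annotateAuthorReacts(
  commentId: string,
  reacts: React[],
  postAuthorId: string
): CommentView {
  const authorReactBadges = reacts
    .filter((r) => r.userId === postAuthorId)
    .map((r) => `OP: ${r.kind}`);
  return { commentId, reacts, authorReactBadges };
}

// A comment the post author has reacted to with "misinterpretation" gets a
// visible badge; ordinary reader reacts are unchanged.
const view = annotateAuthorReacts(
  "c42",
  [
    { userId: "reader1", kind: "agree" },
    { userId: "op", kind: "misinterpretation" },
  ],
  "op"
);
console.log(view.authorReactBadges); // ["OP: misinterpretation"]
```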
I think an issue you’ll face is that few people will “try to socially censure people for using any moderation tools”,
No, my guess is this is roughly the issue. I think the vast majority of complaints here tend to be centered in a relatively small group of people who really care.
It’s not a particularly common expectation that people have about how the internet works, as I have said in other places in this thread. I don’t think the rest of the internet gets these kinds of things right, but I also don’t think that there will be an unquenchable torrent of continuous complaints that will create a landscape of perpetual punishment for anyone trying to use moderation tools.
I think if you resolve a few disagreements, and moderate a relatively small number of people, you end up at an equilibrium that seems a bunch saner to me.
The rest of the Internet is also not about rationality though. If Eliezer started deleting a lot of highly upvoted comments questioning/criticizing him (even if based on misinterpretations, as Eliezer thinks), I bet there will be plenty of people making posts like “look at how biased Eliezer is being here, trying to hide criticism from others!” These posts themselves will get upvoted quite easily, so this will be a cheap/easy way to get karma/status, as well as (maybe subconsciously) getting back at Eliezer for the perceived injustice.
I don’t know if Eliezer is still following this thread or not, but I’m also curious why he thinks there isn’t enough social legitimacy to exercise his mod powers freely, and whether it’s due to a similar kind of expectation.
I mean, yes, these dynamics have caused many people, including myself, to want to leave LessWrong. It sucks. I wish people stopped. Not all moderation is censorship. The fact that it universally gets treated as such by a certain population of LW commenters is one of the worst aspects of this site (and one of the top reasons why in the absence of my own intervention into reviving the site, this site would likely no longer exist at all today).
I think we can fix it! I think it unfortunately takes a long time, and continuous management and moderation to slowly build trust that indeed you can moderate things without suddenly everyone going insane. Maybe there are also better technical solutions.
Claiming this is about “rationality” feels like mostly a weird rhetorical move. I don’t think it’s rational to pretend that unmoderated discussion spaces somehow outperform moderated ones. As has been pointed out many times, 4Chan is not the pinnacle of internet discussion. Indeed, I think largely across the internet, more moderation results in higher trust and higher quality discussions (not universally, you can definitely go on a censorious banning spree as a moderator and try to skew consensus in various crazy ways, but by and large, as a correlation).
This is indeed an observation so core to LessWrong that Well-Kept Gardens Die By Pacifism was, as far as I can tell, a post necessary for LessWrong to exist at all.
I’m not saying this, nor are the hypothetical people in my prediction saying this.
Claiming this is about “rationality” feels like mostly a weird rhetorical move.
We are saying that there is an obvious conflict of interest when an author removes a highly upvoted piece of criticism. Humans being biased when presented with COIs is common sense, so connecting such author moderation with rationality is natural, not a weird rhetorical move.
The rest of your comment seems to be forgetting that I’m only complaining about authors having COI when it comes to moderation, not about all moderation in general. E.g. I have occasional complaints like about banning Said, but generally approve of the job the site moderators are doing on LW. Or if you’re not forgetting this, then I’m not getting your point. E.g.
I don’t think it’s rational to pretend that unmoderated discussion spaces somehow outperform moderated ones.
I have no idea how this relates to my actual complaint.
We are saying that there is an obvious conflict of interest when an author removes a highly upvoted piece of criticism. Humans being biased when presented with COIs is common sense, so connecting such author moderation with rationality is natural, not a weird rhetorical move.
Look, we’ve had these conversations.
I am saying the people who are moderating the spaces have the obvious information advantage about their own preferences and about what it’s actually like to engage with an interlocutor, plus the motivation advantage to actually deal with it. “It’s common sense that the best decisions get made by people with skin in the game and who are most involved with the actual consequences of the relevant decision”. And “it’s common sense that CEOs of organizations make hiring and firing decisions for the people they work with, boards don’t make good firing decisions, the same applies to forums and moderators”.
This is a discussion as old as time in business and governance and whatever. Framing your position as “common sense” is indeed just a rhetorical move, and I have no problem framing the opposite position in just as much of an “obvious” fashion. Turns out, neither position obviously dominates by common sense! Smart people exist on both sides of this debate. I am not against having it again, and I have my own takes on it, but please don’t try to frame this as some kind of foregone conclusion in which you have the high ground.
The rest of your comment seems to be forgetting that I’m only complaining about authors having COI when it comes to moderation, not about all moderation in general.
I was (and largely am) modeling you as being generically opposed to basically any non-spam bans or deletions on the site. Indeed, as I think we’ve discussed, the kind of positions that you express in this thread suggest to me that you should be more opposed to site-wide bans than author bans (since site-wide bans truly make countervailing perspectives harder to find instead of driving them from the comment sections to top-level posts).
If you aren’t against site-wide bans, I do think that’s a pretty different situation! I certainly didn’t feel like I was empowered to moderate more in our conversations on moderation over the last year. It seemed to me you wanted both less individual author moderation, and less admin moderation for anything that isn’t spam. Indeed, I am pretty sure, though I can’t find it, that you said that LW moderation really should only establish a very basic level of protection against spam and basic norms of discourse, but shouldn’t do much beyond that, but I might be misremembering.
If you do support moderation, I would be curious about you DMing me some example of users you think we should ban, or non-spam comments we should delete. My current model of you doesn’t really think those exist.
I think you’re right that I shouldn’t have latched onto the first analogy I thought of. Here’s a list of 11 (for transparency, analogies 3-10 were generated by Gemini 3.0 Pro, though some may have appeared in previous discussions):
The CEO & The Corporation
The Judge & The Courtroom
The Dinner Party Host
The University Classroom / Professor
The Conference Breakout Session
Open Source / GitHub Maintainer
The Stand-Up Comedian & The Heckler
The Art Gallery Opening
Graffiti on a Private House
The Town Hall vs Private Meetings
The Hypothetical HOA
I decided to put detailed analysis of these analogies in this collapsed section, as despite extensive changes by me from the original AI-generated text, it doesn’t quite read like my style. Also, it might be too much text and my summary/conclusions below may be sufficient to convey the main points.
1. The CEO & The Corporation
Analogy: A Forum Post is a “Project.” The Author is the CEO; the Commenter is an Employee. The CEO needs the power to fire employees who disrupt the vision, and the Board (Admins) should defer to the CEO’s judgment.
Disanalogy: In a corporation, the Board cannot see daily operations, creating information asymmetry; on a forum, Admins see the exact same content as the Author. A CEO has a smaller conflict of interest when firing an employee, because they are judged primarily by the company’s financial performance rather than the perception of their ideas. If they fire an employee who makes a good criticism, they might subsequently look better to others, but the company’s performance will suffer.
Conclusion: The analogy fails because the Author lacks the financial alignment of a CEO and possesses no special private information that the Admins lack.
2. The Judge & The Courtroom
Analogy: When there is a conflict in the physical world, we find disinterested parties to make enforceable judgments, even if the cost is very high. When the cost is too high, we either bear it (wait forever for a trial date) or give up the possibility of justice or enforcement, rather than allow an interested party to make such judgments.
Disanalogy: A courtroom has the power of Coercion (forcing the loser to pay, go to jail, or stop doing something). A Forum Author only has the power of Dissociation (refusing to host the commenter’s words). We require neutral judges to deprive people of rights/property; we do not require neutral judges to decide who we associate with.
Conclusion: Dissociation has its own externalities (e.g., hiding of potentially valuable criticism), which we usually regulate via social pressure, or legitimize via social approval, but you don’t want this and therefore need another source of legitimacy.
3. The Dinner Party Host
Analogy: A Post is a private social gathering. The Author is the Host. The Host can kick out a guest for any reason, such as to curate the conversation to his taste.
Disanalogy: In the real world, if a Host kicks out a guest that everyone else likes, the other attendees would disapprove and often express such disapproval. There is no mechanism to then suppress such disapproval, like you seek.
Conclusion: You want the power of the Host without the social accountability that naturally regulates a Host’s behavior.
4. The University Classroom / Professor
Analogy: The Author is a Subject Matter Expert (Professor). The Commenter is a Student. The Dean (Admin) lets the Professor silence students to prevent wasting class time.
Disanalogy: A classroom has a “scarce microphone” (only one person can speak at a time); a forum has threaded comments (parallel discussions), so the “Student” isn’t stopping the “Professor” from teaching. Additionally, LessWrong participants are often peers, not Student/Teacher.
Conclusion: The justification for silencing students (scarcity of time/attention, asymmetry of expertise) does not apply to LW.
5. The Conference Breakout Session
Analogy: The Author is like an Organizer who “rented the room” at a convention. The Organizer has the right to eject anyone to accomplish his goals.
Disanalogy: Just like the Dinner Party, an Organizer would almost never eject someone who is popular with their table. If they did, the table would likely revolt.
Conclusion: This analogy fails to justify the action of overriding the local consensus (upvotes) of the participants in that sub-thread.
6. Open Source / GitHub Maintainer
Analogy: A Post is a Code Repository. A Comment is a Pull Request. The Maintainer has the absolute right to close a Pull Request as “Wontfix” or “Off Topic” to keep the project focused.
Disanalogy: In Open Source, a rejected Pull Request is Closed, not Deleted. The history remains visible, easy to find, and auditable. Also, this situation is similar to the CEO in that the maintainer is primarily judged on how well their project works, with the “battle of ideas” aspect a secondary consideration.
Conclusion: You are asking for more power for an Author than a Maintainer, and a Maintainer has less COI for reasons similar to a CEO.
7. The Stand-Up Comedian & The Heckler
Analogy: The Author is a Comedian. The Commenter is a Heckler. Even if the Heckler is funny (Upvoted), they are stealing the show. The Club (Admins) protects the Comedian because writing a set is high-effort.
Disanalogy: In a physical club, the Heckler interrupts the show. In a text forum, the comment sits below the post. The audience can consume the Author’s “set” without interference before reading the comment.
Conclusion: The physical constraints that justify silencing a heckler do not exist in a digital text format.
8. The Art Gallery Opening
Analogy: The Post is a Painting. The Upvoted Comment is a Critic framing the art negatively. The Artist removes the Critic to preserve the intended Context of the work.
Disanalogy: Art is about aesthetics and subjective experience. LessWrong is ostensibly about intellectual progress and truth-seeking.
Conclusion: Prioritizing “Context” over “Criticism” serves goals that are not LW’s.
9. Graffiti on a Private House
Analogy: A Post is the Author’s House. A Comment is graffiti. The homeowner has the right to scrub the wall (Delete) so neighbors don’t see it.
Disanalogy: This is purely about property value and aesthetics.
Conclusion: Again the goals are too different for the analogy to work.
10. The Town Hall vs Private Meetings
Analogy: In the real world we have both town halls (Neutral Moderator) and meetings in private houses (Author Control). We can have both.
Disanalogy: Even in the discussions inside a private house, social norms usually prevent a host from kicking out a guest who is making popular points that everyone else agrees with.
Conclusion: The social legitimacy that you seek doesn’t exist here either.
11. The Hypothetical HOA
Analogy: A hypothetical residential community with HOA rules that say a homeowner not only has the right to kick out any guest during meetings/parties, but also that no one is allowed to express disapproval of a homeowner exercising such powers. Anyone who buys a house in the community is required to sign the HOA agreement.
Disanalogy: There are already many people in the LW community who never “signed” such agreements.
Conclusion: You are proposing to ask many (“hundreds”) of the existing “homeowners” (some of whom have invested years of FTE work into site participation) to leave, which is implausible in this hypothetical analogy.
Overall Conclusions
None of the analogies are perfect, but we can see some patterns when considering them together.
Neutral, disinterested judgement is a standard social technology for gaining legitimacy. In the case of courts, it is used to legitimize coercion, an otherwise illegitimate activity that would trigger much opposition. In the case of a forum, it can be used to legitimize (or partly legitimize) removing/hiding/deprioritizing popular/upvoted critiques.
Some analogies provide a potential new idea for gaining such legitimacy in some cases: relatively strong and short external feedback loops like financial performance (for the CEO) and real-world functionality (for the open source maintainer) can legitimize greater unilateral discretion. This can potentially work on certain types of posts, but most lack such short-term feedback.
In other cases, suppression of dissent is legitimized for specific reasons clearly not applicable to LW, such as clear asymmetry of expertise between speaker and audience, or physical constraints.
In the remaining cases, the equivalent of author moderation (e.g., kicking out a houseguest) is legitimized only by social approval, but this is exactly what you and Eliezer want to avoid.
Having gone through all of these possible analogies, I think my intuition for judges/courts being the closest analogy to moderation is correct after all: in both cases, disinterested judgement seems to be the best or only way to gain social legitimacy for unpopular decisions.
However, this exercise also made me realize that in most of the real world we do allow people to unilaterally exercise the power of dissociation, as long as it’s regulated by social approval or disapproval, and this may be a reasonable prior for LW.
Perhaps the strongest argument (for my most preferred policy of no author moderation, period) at this point is that unlike the real world, we lack clear boundaries to signal when we are entering a “private space”, nor is it clear how much power/responsibility the authors are supposed to have, with the site mods also being around. The result is a high cost of background confusion (having to track different people’s moderation policies/styles or failing to do so and being surprised) as well as high probability of drama/distraction whenever it is used, because people disagree or are confused about the relevant norms.
On the potential benefits side, the biggest public benefits of moderation can only appear when it’s against the social consensus, otherwise karma voting would suffice as a kind of moderation. But in this case clearly social approval can’t be a source of legitimacy, and if disinterested judgment and external feedback are also unavailable as sources of legitimacy, then it’s hard to see what can work. (Perhaps worth reemphasizing here, I think this intuitive withholding of legitimacy is correct, due to the high chance of abuse when none of these mechanisms are available.) This leaves the private psychological benefit to the author, which is something I can’t directly discuss (due to not having a psychology that wants to “hard” moderate others), and can only counter with the kind of psychological cost to author-commenters like myself, as described in the OP.
@Ben Pace I’m surprised that you’re surprised. Where did your impression that I generally disapprove of the job the site moderators are doing on LW come from, if you can recall?
In the last year I’d guess you’ve written over ten thousand words complaining about LW moderation over dozens of comments, and I don’t recall you ever saying anything positive about the moderation? I recall you once said that you won’t leave the site over our actions (so far), which sounds like you’ll bear our moderation, but is quite different from saying it’s overall good.
Thanks, to clarify some more in case it’s helpful, I think I’ve only complained about 2 things, the Said banning and the author moderation policy, and the word count was just from a lot of back and forth, not the number of issues I’ve had with the mod team? A lot of what you do is just invisible to me, like the user pre-filtering that habryka mentioned and the routine moderation work, but I assume you’re doing a good job on them, as I’m pretty happy with the general LW environment as far as lack of spam, generally good user behavior, and not seeing many complaints about being unfairly moderated by the mod team, etc.
Found my quote about not leaving:
My response to this is that I don’t trust people to garden their own space, along with other reasons to dislike the ban system. I’m not going to leave LW over it though, but just be annoyed and disappointed at humanity whenever I’m reminded of it.
Yeah I think you misinterpreted it. I was just trying to say that unlike those who got what they wanted (the author mod policy) by leaving or threatening to leave, I’m explicitly not using this threat as a way to get what I want. It was a way to claim the moral high ground I guess. Too bad the message misfired.
rsaarelm gave an excellent explanation early on about how the issue seems to be an incompatibility between forum mechanics and blog mechanics, rather than an issue with moderation itself. It would be unfortunate if the point was overlooked because it was misunderstood as “moderation is bad”.
It is fair to say that a blog with a policy “I’ll moderate however I like, if you don’t like it leave” works fine. It’s the default and implicit.
When it comes to a forum system with as many potential posters as there are commenters, “If you don’t like it, leave” is the implicit ultimatum from every single user to every other. But if the feed system that governs content exposure doesn’t allow leaving individual posters, then the only thing that could be left is the entire forum.
This is why all other significant sites with a many-producers → many-consumers model have unsubscribe, mute, and/or block features. It helps ensure a few weeds in the Well-Kept Garden don’t drive away all the plants with low toxin tolerance.
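As a minimal sketch of the kind of feature being described, with hypothetical types and names only (this is not LW’s actual feed code), a reader-side mute list lets someone “leave” an individual poster without leaving the whole forum:

```typescript
// Minimal sketch, hypothetical types and names (not LW's actual feed code):
// a reader-side mute list drops individual posters from that reader's feed.

interface Post {
  id: string;
  authorId: string;
  score: number;
}

interface ReaderPrefs {
  mutedAuthorIds: Set<string>; // posters this reader no longer wants to see
}

function buildFeed(posts: Post[], prefs: ReaderPrefs): Post[] {
  return posts
    .filter((p) => !prefs.mutedAuthorIds.has(p.authorId))
    .sort((a, b) => b.score - a.score); // otherwise rank as usual
}

// A reader who has muted one author still sees everyone else's posts.
const feed = buildFeed(
  [
    { id: "p1", authorId: "alice", score: 40 },
    { id: "p2", authorId: "bob", score: 90 },
  ],
  { mutedAuthorIds: new Set(["bob"]) }
);
console.log(feed.map((p) => p.id)); // ["p1"]
```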
It sounds like—particularly from testimony from habryka and Eliezer—moving to a more meta-blog-like system is/was critical to LessWrong being viable. Which means leaning in to that structure and fully implementing the requisite features seems like an easy way to improve the experience of everyone.
I think you’d need to present some kind of evidence that it really leads to better results than the best available alternative.
I am perhaps misreading, but I think this sentence should be interpreted as “if you want to convince [the kind of people that I’m talking about], then you should do [X, Y, Z].” Not “I unconditionally demand that you do [X, Y, Z].”
This comment seems like a too-rude response to someone who (it seems to me) is politely expressing and discussing potential problems. The rudeness seems accentuated by the object level topic.
Curious whether you have any guesses on what would make it seem like a sympathetic decision to the audience
Off-the-cuff idea, probably a bad one:
Stopping short of “turning off commenting entirely”, being able to make comments to a given post subject to a separate stage of filtering/white-listing. The white-listing criteria are set by the author and made public. Ideally, the system is also not controlled by the author directly, but by someone the author expects to be competent at adhering to those criteria (perhaps an LLM, if they’re competent enough at this point).
The system takes direct power out of the author’s hands. They still control the system’s parameters, but there’s a degree of separation now. The author is not engaging in “direct” acts of “tyranny”.
It’s made clear to readers that the comments under a given post have been subject to additional selection, whose level of bias they can estimate by reading the white-listing criteria.
The white-listing criteria are public. Depending on what they are, they can be (a) clearly sympathetic, (b) principled-sounding enough to decrease the impression of ad-hoc acts of tyranny even further.
(Also, ideally, the system doing the selection doesn’t care about what the author wants beyond what they specified in the criteria, and is thus an only boundedly and transparently biased arbiter.)
The commenters are clearly made aware that there’s no guarantee their comments on this post will be accepted, so if they decide to spend time writing them, they know what they’re getting into (vs. bitterness-inducing sequence where someone spends time on a high-effort comment that then gets deleted).
There’s no perceived obligation to respond to comments the author doesn’t want to respond to, because they’re rejected (and ideally the author isn’t even given the chance to read them).
There are no “deleting a highly-upvoted comment” events with terrible optics.
Probably this is still too censorship-y, though? (And obviously doesn’t solve the problem where people make top-level takedown posts in which all the blacklisted criticism is put and then highly upvoted. Though maybe that’s not going to be as bad and widespread as one might fear.)
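For concreteness, here is a minimal sketch of the whitelisting idea above, with hypothetical names throughout and the judging step left abstract (it could be an LLM call or a trusted human delegate; nothing here reflects an existing LW feature):

```typescript
// Minimal sketch of the whitelisting proposal, hypothetical names throughout:
// comments on an opted-in post are screened against the author's public
// criteria before they appear; the judge is injected and kept abstract.

interface PendingComment {
  id: string;
  body: string;
}

interface WhitelistPolicy {
  postId: string;
  publicCriteria: string; // shown to readers and commenters up front
  judge: (commentBody: string, criteria: string) => Promise<boolean>;
}

async function screenComments(
  pending: PendingComment[],
  policy: WhitelistPolicy
): Promise<{ accepted: PendingComment[]; rejected: PendingComment[] }> {
  const accepted: PendingComment[] = [];
  const rejected: PendingComment[] = [];
  for (const c of pending) {
    // The author sets the criteria but never acts on individual comments;
    // rejected comments are simply never shown under the post.
    if (await policy.judge(c.body, policy.publicCriteria)) {
      accepted.push(c);
    } else {
      rejected.push(c);
    }
  }
  return { accepted, rejected };
}
```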
It’s indeed the case that I haven’t been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it’d had Twitter’s options for turning off commenting entirely.
So yes, I suppose that people could go ahead and make this decision without me. I haven’t been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.
Curious whether you have any guesses on what would make it seem like a sympathetic decision to the audience. My model here is that this is largely not really a technical problem, but more of a social problem (which is e.g. better worked towards by things like me writing widely read posts on moderation), though I still like trying to solve social problems with better technical solutions and am curious whether you have ideas (that are not “turn off commenting entirely”, which I do think is a bad idea for LW in particular).
I’m not sure what Eliezer is referring to, but my guess is that many of the comments that he would mark as “elaborate-misinterpretations”, I would regard as reasonable questions / responses, and I would indeed frown on Eliezer just deleting them. (Though also shrug, since the rules are that authors can delete whatever comments they want.)
Some examples that come to mind are this discussion with Buck and this discussion with Matthew Barnett, in which (to my reading of things) Eliezer seems to be weirdly missing what the other person is saying at least as much as they are missing what he is saying.
I from the frustration Eliezer expressed in those threads, I would guess that he would call these elaborate-misinterpretations.
My take is that there’s some kind of weird fuckyness about communicating about some of these topics where both sides feel exasperation that the other side is apparently obstinately mishearing them. I would indeed think it would be worse if the post author in posts like that just deleted the offending comments.
I currently doubt the Buck thread would qualify as such from Eliezer’s perspective (and agree with you there that in as much as Eliezer disagrees, he is wrong in that case).
IMO I do think it’s a pretty bad mark on LW’s reputation that posts like Matthew’s keep getting upvoted, with what seem to me like quite aggressively obtuse adversarial interpretations of what people are saying.
The existence of the latter unfortunately makes the former much harder to navigate.
I’m guessing that there are
a lotenough people like me, who have such a strong prior on “a moderator shouldn’t mod their own threads, just like a judge shouldn’t judge cases involving themselves”, plus our own experiences showing that the alternative of forum-like moderation works well enough, that it’s impossible to overcome this via abstract argumentation. I think you’d need to present some kind of evidence that it really leads to better results than the best available alternative.Nowhere on the whole wide internet works like that! Clearly the vast majority of people do not think that authors shouldn’t moderate their own threads. Practically nowhere on the internet do you even have the option for anything else.
Where’s this coming from all of a sudden? Forums work like this, Less Wrong used to work like this. Data Secrets Lox still works like this. Most subreddits work like this. This whole thread is about how maybe the places that work like this have the right idea, so it’s a bit late in the game to open up with “they don’t exist and aren’t a thing anyone wants”.
Yes, Reddit is one of the last places on the internet where this is semi-common, but even there, most subreddits are moderated by people who are active posters, and there are no strong norms against moderators moderating responses to their own comments or posts.
I agree I overstated here and that there are some places on the internet where this is common practice, but it’s really a very small fraction of the internet these days. You might bemoan this as a fate of the internet, but it’s just really not how most of the world thinks content moderation works.
There is actually a significant difference between “Nowhere on the whole wide internet works like that!” and “few places work like that”. It’s not just a nitpick, because to support my point that it will be hard for Eliezer to get social legitimacy for freely exercising author mod power, I just need that there is a not too tiny group of people on the Internet who still prefers to have no author moderation (it can be small in absolute numbers, as long as it’s not near zero, since they’re likely to congregate at a place like LW that values rationality and epistemics). The fact that there are still even a few places on the Internet that works like this makes a big difference to how plausible my claim is.
I mean, I think no, if truly there is only a relatively small fraction of people like that around, we as the moderators can just ask those people to leave. Like, it’s fine if we have to ask hundreds of people to leave, the world is wide and big. If most of the internet is on board with not having this specific stipulation, then there is a viable LessWrong that doesn’t have those people.
[ belabor → bemoan? ]
No, I don’t “need” to do that. This is (approximately) my forum. If anything you “need” to present some kind of evidence that bridges the gap here! If you don’t like it build your own forum that is similarly good or go to a place where someone has built a forum that does whatever you want here.
The point of the post is not to convince everyone, there was never any chance of that, it’s to build enough shared understanding that people understand the principles of the space and can choose to participate or leave.
Ok I misunderstood your intentions for writing such posts. Given my new understanding, will you eventually move to banning or censoring people for expressing disapproval of what they perceive as bad or unfair moderation, even in their own “spaces”? I think if you don’t, then not enough people will voluntarily leave or self-censor such expressions of disapproval to get the kind of social legitimacy that Eliezer and you desire, but if you do, I think you’ll trigger an even bigger legitimacy problem because there won’t be enough buy-in for such bans/censorship among the LW stakeholders.
This is a terrible idea given the economy of scale in such forums.
I mean, I had a whole section in the Said post about how I do think it’s a dick move to try to socially censure people for using any moderation tools. If someone keeps trying to create social punishment for people doing that, then yeah, I will ask them to please do that somewhere else but here, or more likely, leave the content up but reduce the degree to which things like the frontpage algorithm feed attention to it. I don’t know how else any norms on the site are supposed to bottom out.
Top-level posts like this one seem totally fine. Like, if someone wants to be like “I am not trying to force some kind of social punishment on anyone, but I do think there is a relevant consideration here, but I also understand this has been litigated a bunch and I am not planning to currently reopen that”, then that’s fine. Of course you did kind of reopen it, which to be clear I think is fine on the margin, but yeah, I would totally ask you to stop if you did that again and again.
I think an issue you’ll face is that few people will “try to socially censure people for using any moderation tools”, but instead different people will express disapproval of different instances of perceived bad moderation, which adds up to that a large enough share of all author moderation gets disapproved of (or worse blow up into big dramas), such that authors like Eliezer do not feel there’s enough social legitimacy to really use them.
(Like in this case I’m not following the whole site and trying to censure anyone who does author moderation, but speaking up because I myself got banned!)
And Eliezer’s comment hints why this would happen: the comments he wants to delete are often highly upvoted. If you delete such comments, and the mod isn’t a neutral third party, of course a lot of people will feel it was wrong/unfair and want to express disapproval, but they probably won’t be the same people each time.
How are you going to censor or deprioritize such expressions of disapproval? By manual mod intervention? AI automation? Instead of going to that trouble and cause a constant stream of resentment from people feeling wronged and silenced, it seems better for Eliezer to just mark the comments that misinterpret him as misinterpretations (maybe through the react system or a more prominent variation of it, if he doesn’t want to just reply to each one and say “this is a misinterpretation). One idea is reacts from the OP author are distinguished or more prominently displayed somehow.
No, my guess is this is roughly the issue. I think the vast majority of complaints here tend to be centered in a relatively small group of people who really care.
It’s not a particularly common expectation that people have about how the internet works, as I have said in other places in this thread. I don’t think the rest of the internet gets these kinds of things right, but I also don’t think that there will be an unquenchable torrent of continuous complaints that will create a landscape of perpetual punishment for anyone trying to use moderation tools.
I think if you resolve a few disagreements, and moderate a relatively small number of people, you end up at an equlibrium that seems a bunch saner to me.
The rest of the Internet is also not about rationality though. If Eliezer started deleting a lot of highly upvoted comments questioning/criticizing him (even if based on misinterpretations like Eliezer thinks), I bet there will be plenty of people making posts like “look at how biased Eliezer is being here, trying to hide criticism from others!” These posts themselves will get upvoted quite easily, so this will be a cheap/easy way to get karma/status, as well as (maybe subconsciously) getting back at Eliezer for the perceived injustice.
I don’t know if Eliezer is still following this thread or not, but I’m also curious why he thinks there isn’t enough social legitimacy to exercise his mod powers freely, whether its due to a similar kind of expectation.
I mean, yes, these dynamics have caused many people, including myself, to want to leave LessWrong. It sucks. I wish people stopped. Not all moderation is censorship. The fact that it universally gets treated as such by a certain population of LW commenters is one of the worst aspects of this site (and one of the top reasons why in the absence of my own intervention into reviving the site, this site would likely no longer exist at all today).
I think we can fix it! I think it unfortunately takes a long time, and continuous management and moderation to slowly build trust that indeed you can moderate things without suddenly everyone going insane. Maybe there are also better technical solutions.
Claiming this is about “rationality” feels like mostly a weird rhetorical move. I don’t think it’s rational to pretend that unmoderated discussion spaces somehow outperform moderated ones. As has been pointed out many times, 4Chan is not the pinnacle of internet discussion. Indeed, I think largely across the internet, more moderation results in higher trust and higher quality discussions (not universally, you can definitely go on a censorious banning spree as a moderator and try to skew consensus in various crazy ways, but by and large, as a correlation).
This is indeed an observation so core to LessWrong that Well-Kept Gardens Die By Pacifism was, as far as I can tell, a post necessary for LessWrong to exist at all.
I’m not saying this, nor are the hypothetical people in my prediction saying this.
We are saying that there is an obvious conflict of interest when an author removes a highly upvoted piece of criticism. Humans being biased when presented with COIs is common sense, so connecting such author moderation with rationality is natural, not a weird rhetorical move.
The rest of your comment seems to be forgetting that I’m only complaining about authors having COI when it comes to moderation, not about all moderation in general. E.g. I have occasional complaints like about banning Said, but generally approve of the job site moderators are doing on LW. Or if you’re not forgetting this, then I’m not getting your point. E.g.
I have no idea how this related to my actual complaint.
Look, we’ve had these conversations.
I am saying the people who are moderating the spaces have the obvious information advantage about their own preferences and about what it’s actually like to engage with an interlocutor, plus the motivation advantage to actually deal with it. “It’s common sense that the best decisions get made by people with skin in the game and who are most involved with the actual consequences of the relevant decision”. And “it’s common sense that CEOs of organizations make hiring and firing decisions for the people they work with, boards don’t make good firing decisions, the same applies to forums and moderators”.
This is a discussion as old as time in business and governance and whatever. Framing your position as “common sense” is indeed just a rhetorical move, and I have no problem framing the opposite position in just as much of an “obvious” fashion. Turns out, neither position obviously dominates by common sense! Smart people exist on both sides of this debate. I am not against having it again, and I have my own takes on it, but please don’t try to frame this as some kind of foregone conclusion in which you have the high ground.
I was (and largely am) modeling you as being generically opposed to basically any non-spam bans or deletions on the site. Indeed, as I think we’ve discussed, the kind of positions that you express in this thread suggest to me that you should be more opposed to site-wide bans than author bans (since site-wide bans truly make counterveiling perspectives harder to find instead of driving them from the comment sections to top-level posts).
If you aren’t against site-wide bans, I do think that’s a pretty different situation! I certainly didn’t feel like I was empowered to moderate more in our conversations on moderation over the last year. It seemed to me you wanted both less individual author moderation, and less admin moderation for anything that isn’t spam. Indeed, I am pretty sure, though I can’t find it, that you said that LW moderation really should only establish a very basic level of protection against spam and basic norms of discourse, but shouldn’t do much beyond that, but I might be misremembering.
If you do support moderation, I would be curious about you DMing me some example of users you think we should ban, or non-spam comments we should delete. My current model of you doesn’t really think those exist.
I think you’re right that I shouldn’t have latched onto the first analogy I thought of. Here’s a list of 11 (for transparency, analogies 3-10 were generated by Gemini 3.0 Pro, though some may have appeared in previous discussions.):
The CEO & The Corporation
The Judge & The Courtroom
The Dinner Party Host
The University Classroom / Professor
The Conference Breakout Session
Open Source / GitHub Maintainer
The Stand-Up Comedian & The Heckler
The Art Gallery Opening
Graffiti on a Private House
The Town Hall vs Private Meetings
The Hypothetical HOA
I decided to put detailed analysis of these analogies in this collapsed section, as despite extensive changes by me from the original AI-generated text, it doesn’t quite read like my style. Also, it might be too much text and my summary/conclusions below may be sufficient to convey the main points.
1. The CEO & The Corporation
Analogy: A Forum Post is a “Project.” The Author is the CEO; the Commenter is an Employee. The CEO needs the power to fire employees who disrupt the vision, and the Board (Admins) should defer to the CEO’s judgment.
Disanalogy: In a corporation, the Board cannot see daily operations, creating information asymmetry; on a forum, Admins see the exact same content as the Author. A CEO has a smaller conflict of interest when firing an employee, because they are judged primarily by the company’s financial performance rather than the perception of their ideas. If they fire an employee who makes a good criticism, they might subsequently look better to others, but the company’s performance will suffer.
Conclusion: The analogy fails because the Author lacks the financial alignment of a CEO and possesses no special private information that the Admins lack.
2. The Judge & The Courtroom
Analogy: When there is a conflict in the physical world, we find disinterested parties to make enforceable judgments, even if the cost is very high. When the cost is too high, we either bear it (wait forever for a trial date) or give up the possibility of justice or enforcement, rather than allow an interested party to make such judgments.
Disanalogy: A courtroom has the power of Coercion (forcing the loser to pay, go to jail, or stop doing something). A Forum Author only has the power of Dissociation (refusing to host the commenter’s words). We require neutral judges to deprive people of rights/property; we do not require neutral judges to decide who we associate with.
Conclusion: Dissociation has its own externalities (e.g., hiding of potentially valuable criticism), which we usually regulate via social pressure, or legitimize via social approval, but you don’t want this and therefore need another source of legitimacy.
3. The Dinner Party Host
Analogy: A Post is a private social gathering. The Author is the Host. The Host can kick out a guest for any reason, such as to curate the conversation to his taste.
Disanalogy: In the real world, if a Host kicks out a guest that everyone else likes, the other attendees would disapprove and often express such disapproval. There is no mechanism to then suppress such disapproval, like you seek.
Conclusion: You want the power of the Host without the social accountability that naturally regulates a Host’s behavior.
4. The University Classroom / Professor
Analogy: The Author is a Subject Matter Expert (Professor). The Commenter is a Student. The Dean (Admin) lets the Professor silence students to prevent wasting class time.
Disanalogy: A classroom has a “scarce microphone” (only one person can speak at a time); a forum has threaded comments (parallel discussions), so the “Student” isn’t stopping the “Professor” from teaching. Additionally, LessWrong participants are often peers, not Student/Teacher.
Conclusion: The justification for silencing students (scarcity of time/attention, asymmetry of expertise) does not apply to LW.
5. The Conference Breakout Session
Analogy: The Author is like an Organizer who “rented the room” at a convention. The Organizer has the right to eject anyone to accomplish his goals.
Disanalogy: Just like the Dinner Party, an Organizer would almost never eject someone who is popular with their table. If they did, the table would likely revolt.
Conclusion: This analogy fails to justify the action of overriding the local consensus (upvotes) of the participants in that sub-thread.
6. Open Source / GitHub Maintainer
Analogy: A Post is a Code Repository. A Comment is a Pull Request. The Maintainer has the absolute right to close a Pull Request as “Wontfix” or “Off Topic” to keep the project focused.
Disanalogy: In Open Source, a rejected Pull Request is Closed, not Deleted. The history remains visible, easy to find, and auditable. Also, this situation is similar to the CEO in that the maintainer is primarily judged on how well their project works, with the “battle of ideas” aspect a secondary consideration.
Conclusion: You are asking for more power for an Author than a Maintainer, and a Maintainer has less COI for reasons similar to a CEO.
7. The Stand-Up Comedian & The Heckler
Analogy: The Author is a Comedian. The Commenter is a Heckler. Even if the Heckler is funny (Upvoted), they are stealing the show. The Club (Admins) protects the Comedian because writing a set is high-effort.
Disanalogy: In a physical club, the Heckler interrupts the show. In a text forum, the comment sits below the post. The audience can consume the Author’s “set” without interference before reading the comment.
Conclusion: The physical constraints that justify silencing a heckler do not exist in a digital text format.
8. The Art Gallery Opening
Analogy: The Post is a Painting. The Upvoted Comment is a Critic framing the art negatively. The Artist removes the Critic to preserve the intended Context of the work.
Disanalogy: Art is about aesthetics and subjective experience. LessWrong is ostensibly about intellectual progress and truth-seeking.
Conclusion: Prioritizing “Context” over “Criticism” serves goals that are not LW’s.
9. Graffiti on a Private House
Analogy: A Post is the Author’s House. A Comment is graffiti. The homeowner has the right to scrub the wall (Delete) so neighbors don’t see it.
Disanalogy: This is purely about property value and aesthetics.
Conclusion: Again the goals are too different for the analogy to work.
10. The Town Hall vs Private Meetings
Analogy: In the real world we have both town halls (Neutral Moderator) and meetings in private houses (Author Control). We can have both.
Disanalogy: Even in the discussions inside a private house, social norms usually prevent a host from kicking out a guest who is making popular points that everyone else agrees with.
Conclusion: The social legitimacy that you seek doesn’t exist here either.
11. The Hypothetical HOA
Analogy: A hypothetical residential community with HOA rules that say, a homeowner not only has the right to kick out any guests during meetings/parties, but no one is allowed to express disapproval for exercising such powers. Anyone who buys a house in the community is required to sign the HOA agreement.
Disanalogy: There are already many people in the LW community who never “signed” such agreements.
Conclusion: You are proposing to ask many (“hundreds”) of the existing “homeowners” (some of whom have invested years of FTE work into site participation) to leave, which is implausible in this hypothetical analogy.
Overall Conclusions
None of the analogies are perfect, but we can see some patterns when considering them together.
Neutral, disinterested judgement is a standard social technology for gaining legitimacy. In the case of courts, it is used to legitimize coercion, an otherwise illegitimate activity that would trigger much opposition. In the case of a forum, it can be used to legitimize (or partly legitimize) removing/hiding/deprioritizing popular/upvoted critiques.
Some analogies do suggest a potential new way of gaining such legitimacy in some cases: relatively strong and short external feedback loops, like financial performance (for the CEO) and real-world functionality (for the open source maintainer), can legitimize greater unilateral discretion. This could work for certain types of posts, but most posts lack such short-term feedback.
In other cases, suppression of dissent is legitimized for specific reasons clearly not applicable to LW, such as clear asymmetry of expertise between speaker and audience, or physical constraints.
In the remaining cases, the equivalent of author moderation (e.g., kicking out a houseguest) is legitimized only by social approval, but this is exactly what you and Eliezer want to avoid.
Having gone through all of these possible analogies, I think my intuition that judges/courts are the closest analogy to moderation is correct after all: in both cases, disinterested judgement seems to be the best or only way to gain social legitimacy for unpopular decisions.
However, this exercise also made me realize that in most of the real world we do allow people to unilaterally exercise the power of dissociation, as long as it’s regulated by social approval or disapproval, and this may be a reasonable prior for LW.
Perhaps the strongest argument (for my most preferred policy of no author moderation, period) at this point is that, unlike in the real world, we lack clear boundaries signaling when we are entering a “private space”, and it is also unclear how much power/responsibility authors are supposed to have, given that the site mods are also around. The result is a high cost of background confusion (having to track different people’s moderation policies/styles, or failing to do so and being surprised), as well as a high probability of drama/distraction whenever author moderation is used, because people disagree or are confused about the relevant norms.
On the potential benefits side, the biggest public benefits of moderation can only appear when it goes against the social consensus; otherwise karma voting would suffice as a kind of moderation. But in that case social approval clearly can’t be a source of legitimacy, and if disinterested judgment and external feedback are also unavailable as sources of legitimacy, then it’s hard to see what can work. (Perhaps worth reemphasizing here: I think this intuitive withholding of legitimacy is correct, due to the high chance of abuse when none of these mechanisms are available.) This leaves the private psychological benefit to the author, which I can’t directly discuss (due to not having a psychology that wants to “hard” moderate others) and can only weigh against the kind of psychological cost to author-commenters like myself, as described in the OP.
@Ben Pace I’m surprised that you’re surprised. Where did your impression that I generally disapprove of the job site moderators are doing on LW come from, if you can recall?
In the last year I’d guess you’ve written over ten thousand words complaining about LW moderation across dozens of comments, and I don’t recall you ever saying anything positive about the moderation? I recall you once saying that you won’t leave the site over our actions (so far), which sounds like you’ll bear our moderation, but that is quite different from saying it’s overall good.
Thanks. To clarify some more in case it’s helpful: I think I’ve only complained about two things, the Said banning and the author moderation policy, and the word count was just from a lot of back and forth, not from the number of issues I’ve had with the mod team. A lot of what you do is just invisible to me, like the user pre-filtering that habryka mentioned and the routine moderation work, but I assume you’re doing a good job on them, as I’m pretty happy with the general LW environment as far as lack of spam, generally good user behavior, not seeing many complaints about being unfairly moderated by the mod team, etc.
Found my quote about not leaving:
Yeah I think you misinterpreted it. I was just trying to say that unlike those who got what they wanted (the author mod policy) by leaving or threatening to leave, I’m explicitly not using this threat as a way to get what I want. It was a way to claim the moral high ground I guess. Too bad the message misfired.
rsaarelm gave an excellent explanation early on of how the issue seems to be an incompatibility between forum mechanics and blog mechanics, rather than an issue with moderation itself. It would be unfortunate if the point was overlooked because it was misunderstood as “moderation is bad”.
It is fair to say that a blog with a policy “I’ll moderate however I like, if you don’t like it leave” works fine. It’s the default and implicit.
When it comes to a forum system with as many potential posters as there are commenters, “if you don’t like it, leave” is the implicit ultimatum from every single user to every other. But if the feed system that governs content exposure doesn’t let you leave individual posters, then the only thing left to leave is the entire forum.
This is why other significant sites with a many-producers → many-consumers model all have unsubscribe, mute, and/or block features. It helps ensure a few weeds in the Well-Kept Garden don’t drive away all the plants with low toxin tolerance.
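To make the mechanism concrete, here is a minimal sketch (in Python, with hypothetical names; this is not LW’s actual architecture) of a feed that respects per-reader mute/unsubscribe lists, so that “leaving” an individual poster doesn’t require leaving the whole forum:

```python
# Minimal, hypothetical sketch: a feed built per reader, excluding authors
# that this particular reader has muted. Nothing is deleted site-wide.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    title: str
    karma: int = 0


@dataclass
class Reader:
    name: str
    muted_authors: set = field(default_factory=set)

    def mute(self, author: str) -> None:
        # "Leaving" one poster, rather than leaving the whole forum.
        self.muted_authors.add(author)


def build_feed(all_posts: list[Post], reader: Reader) -> list[Post]:
    """Return this reader's feed: muted authors removed, then sorted by karma."""
    visible = [p for p in all_posts if p.author not in reader.muted_authors]
    return sorted(visible, key=lambda p: p.karma, reverse=True)


# Usage: a reader who has muted one author still sees everyone else.
posts = [Post("alice", "On moderation", 40), Post("bob", "A takedown", 55)]
reader = Reader("carol")
reader.mute("bob")
print([p.title for p in build_feed(posts, reader)])  # ['On moderation']
```

The point of the sketch is only that the filtering happens per reader at feed-construction time; no author or moderator has to delete anything for a reader to stop seeing a poster they want to avoid.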
It sounds, particularly from the testimony of habryka and Eliezer, like moving to a more meta-blog-like system is/was critical to LessWrong being viable. That means leaning into that structure and fully implementing the requisite features seems like an easy way to improve everyone’s experience.
I am perhaps misreading, but I think this sentence should be interpreted as “if you want to convince [the kind of people that I’m talking about], then you should do [X, Y, Z]”, not as “I unconditionally demand that you do [X, Y, Z]”.
This comment seems like a too-rude response to someone who (it seems to me) is politely expressing and discussing potential problems. The rudeness seems accentuated by the object level topic.
Off-the-cuff idea, probably a bad one:
Stopping short of “turning off commenting entirely”: make comments on a given post subject to a separate stage of filtering/white-listing. The white-listing criteria are set by the author and made public. Ideally, the system is also not controlled by the author directly, but by someone the author expects to be competent at adhering to those criteria (perhaps an LLM, if they’re competent enough at this point). (A rough sketch of what this could look like is at the end of this comment.)
The system takes direct power out of the author’s hands. They still control the system’s parameters, but there’s a degree of separation now. The author is not engaging in “direct” acts of “tyranny”.
It’s made clear to readers that the comments under a given post have been subject to additional selection, whose level of bias they can estimate by reading the white-listing criteria.
The white-listing criteria are public. Depending on what they are, they can be (a) clearly sympathetic, (b) principled-sounding enough to decrease the impression of ad-hoc acts of tyranny even further.
(Also, ideally, the system doing the selection doesn’t care about what the author wants beyond what they specified in the criteria, and is thus only a boundedly and transparently biased arbiter.)
The commenters are clearly made aware that there’s no guarantee their comments on this post will be accepted, so if they decide to spend time writing them, they know what they’re getting into (vs. bitterness-inducing sequence where someone spends time on a high-effort comment that then gets deleted).
There’s no perceived obligation to respond to comments the author doesn’t want to respond to, because they’re rejected (and ideally the author isn’t even given the chance to read them).
There are no “deleting a highly-upvoted comment” events with terrible optics.
Probably this is still too censorship-y, though? (And it obviously doesn’t solve the problem where people make top-level takedown posts into which all the rejected criticism gets collected and then highly upvoted. Though maybe that’s not going to be as bad and widespread as one might fear.)
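For what it’s worth, here is a rough sketch of how that white-listing stage might work if an LLM were the judge. Everything in it (the criteria text, the `ask_llm` call, the function names) is a hypothetical placeholder, not an existing LW feature or API:

```python
# Hypothetical sketch of the proposed white-listing stage: the author publishes
# criteria once; an automated judge applies them to each new comment before it
# appears. `ask_llm` stands in for whatever model call the site would use.
from dataclasses import dataclass


@dataclass
class Verdict:
    accepted: bool
    reason: str  # shown to the commenter, so rejections aren't silent


# Author-specified, publicly visible criteria (illustrative example only).
PUBLIC_CRITERIA = """
Accept the comment unless it (a) attributes to the post a claim it does not
make, or (b) is primarily about the author rather than the argument.
"""


def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call; assumed to answer 'ACCEPT: ...' or 'REJECT: ...'."""
    raise NotImplementedError


def screen_comment(post_text: str, comment_text: str) -> Verdict:
    """Apply the author's public criteria; the author never sees rejected comments."""
    prompt = (
        f"Criteria:\n{PUBLIC_CRITERIA}\n"
        f"Post:\n{post_text}\n\nComment:\n{comment_text}\n"
        "Answer 'ACCEPT: <reason>' or 'REJECT: <reason>'."
    )
    answer = ask_llm(prompt)
    accepted = answer.strip().upper().startswith("ACCEPT")
    return Verdict(accepted=accepted, reason=answer.split(":", 1)[-1].strip())
```

The design choice doing the work here is that the author’s only input is the public criteria string; the accept/reject decision, and the reason shown to the commenter, come from the automated judge rather than from the author acting case by case.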