Banning Said Achmiz (and broader thoughts on moderation)
It’s been roughly 7 years since the LessWrong user-base voted on whether it was time to close down shop and become an archive, or to move to the LessWrong 2.0 platform, with me as head admin. For almost as long, I have spent around one hundred hours nearly every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter, by my lights.[1] Today I am declaring defeat on that goal and giving him a 3-year ban.
What follows is an explanation of the models of moderation that convinced me this is a good idea, the history of past moderation actions we’ve taken for Said, and some amount of case law that I derive from these two. If you just want to know the moderation precedent, you can jump straight there.
I think few people have done as much to shape the culture of LessWrong as Said. More than 50% of the time when I would ask posters, commenters and lurkers about their models of LessWrong culture, they’d say some version of either:
Of all the places on the internet, LessWrong is a place that really forces you to get your arguments together. It’s very much a no-bullshit culture, and I think this is one of the things that makes it one of the most valuable forums on the internet.
Or
Man, posting on LessWrong seems really unrewarding. You show up, you put a ton of effort into a post, and at the end the comment section will tear apart some random thing that isn’t load bearing for your argument, isn’t something you consider particularly important, and whose discussion doesn’t illuminate what you are trying to communicate, all the while implying that they are superior in their dismissal of your irrational and dumb ideas.
And frequently when I dig into how they formed these impressions, a comment by Said would be at least heavily involved in that.
I think both of these perspectives are right. LessWrong is a unique place on the internet where bad ideas do get torn apart in ways that are rare and valuable, and also a place where there is a non-trivial chance that your comment section gets derailed by someone making some extremely confident assumption about what you intended to say, followed by a pile of sneering dismissal.[2]
I am, overall, making this decision to ban Said with substantial sadness. As evidenced by my spending hundreds of hours over the years trying to resolve this via argument and soft-touch moderation, this was very far from an obvious choice. This post itself took many dozens of hours of work, and I hope it illuminates some of the ways this decision was made, what it means for the future of LessWrong, and how it will affect future moderation. I apologize for the length.
The sneer attractor
One of the recurring attractors of the modern internet, dominating many platforms, subreddits, and subcultures, is the sneer attractor. It is exemplified in my mind by places like RationalWiki and the eponymous “SneerClub”, but also by many corners of Reddit and, of course, 4chan. At its worst, that culture looks like this:
Sociologically, my sense is this culture comes from a mixture of the following two dynamics:
Conflationary alliances, which conflate all the different reasons why something is bad. The key component of making good sneer club criticism is to never actually say out loud what your problem is. Sneerers say something that the reader can fill in with whatever they think the problem is, which allows them to establish an appearance of consensus without a shared model. When pressed, sneerers hide behind irony or deflect.[3]
A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don’t need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.
Since the sneer attractor is one of the biggest and most destructive attractors of the modern internet, I worry about LessWrong also being affected by its dynamics. I think we are unlikely to ever become remotely as bad as SneerClub, or most of Reddit or Twitter, but I do see similar cultural dynamics rear their head on LessWrong. If these dynamics were to get worse, the thing I would mostly expect to see is the site quietly dying; fewer people venturing new/generative content, fewer people checking the site in hopes of such content, and an eventual ghost town of boring complaints about the surrounding scene, with links.
But before I go into that, let’s discuss the sneer attractor’s mirror image:
The LinkedIn attractor
In the other corners of the internet and the rest of the world, especially in the land of professional communities, we have what I will call the “LinkedIn attractor”. In those communities saying anything bad about another community member is frowned upon. Disputes are supposed to be kept private. When someone intends to run an RCT on which doctors in your hospital are most effective, you band together and refuse to participate, because establishing performance metrics would hurt the unity of your community.
Since anything but abstract approval is risky in those communities, a typical post looks like this, with largely vacuous engagement[4]:
And at the norms level, it looks like this (cribbed from an interesting case study of the “Obama Campaign Alumni” Facebook group descending into this attractor):
This cultural attractor is not mutually exclusive with the sneer attractor. Indeed, the LinkedIn attractor appears to be the memetically most successful way groups relate to their ingroup members, while the sneer attractor governs how they relate to their outgroups.
The dynamics behind the LinkedIn attractor seem mechanistically straightforward. I think of them as “mutual reputation protection alliances”.
In almost every professional context I’ve been in, these alliances manifest as a constant stream of agreements—”I say good things about you, if you say good things about me.”
This makes sense. It’s unlikely I would benefit from people spreading negative information about you, and we would both clearly benefit from protecting each other’s reputation. So a natural equilibrium emerges where people gather many of these mutual reputation protection alliances, ultimately creating groups with strong commitments to protect each other’s reputation and the group’s reputation in the eyes of the rest of the world.
Of course, the people trying to use reputation to navigate the world — to identify who is trustworthy — are much more diffuse and aren’t party to these negotiations. But their interests weigh heavily enough that some ecosystem pressure exists for antibodies to mutual reputation protection alliances to develop (such as rating systems for Uber drivers, as opposed to taxi cartels where feedback to individuals is nearly impossible to aggregate).
How this relates to LessWrong
Ok, now why am I going on this digression about the Sneer Attractor and the LinkedIn Attractor? The reason is that I think much of the heatedness of moderation discussions related to Said, and people’s fears, have been routing through the worry that LessWrong will end up in either the Sneer Attractor or the LinkedIn Attractor.
As I’ve talked with many people with opinions on Said’s comments on the site, a recurring theme has been that Said is what prevents LessWrong from falling into the LinkedIn attractor. Said, in many people’s minds, is the bearer of a flag like this:
“Just because you are hurt by, and anxious about others criticizing you or your ideas, doesn’t mean we are going to accommodate you. It is the responsibility of your audience to determine what they think of you and your contributions.
You do not own your reputation. Every individual owns their own judgment of you.
You can shape it by doing good or bad things, but you do not get to shape it by preventing me and others from openly discussing you and your contributions.”
And I really care about this flag too. Indeed, many of the decisions I have made around LessWrong have been aimed at fostering a culture that understands and rallies behind this flag. LessWrong is not LinkedIn, and LessWrong is not the EA Forum, and that is good and important.
And I do think Said provides a shield. Having Said comment on LessWrong posts, and having those comments be upvoted, helps against sliding down the attractor towards LinkedIn.
But on the other hand, I notice in myself that a lot of what I am most worried about is the Sneer Attractor: LessWrong becoming a place that can’t do much but tear things down. Where criticism is vague and high-level and relies on conflationary alliances to get traction, but does not ultimately strengthen what it criticizes or those who read the criticism. Filled with comments that aim to make the readers and the voters feel superior to all those fools who keep saying wrong things, despite not equipping readers to say any less wrong things themselves.
And I do think Said moves LessWrong substantially towards that path. When Said is at his worst, he writes comments like this:
This, to be clear, is still better than the SneerClub comment visible above. For example when asked to clarify, Said obliges:
But the overall effect on the culture is still there, and the thread still results in Benquo eventually disengaging in frustration, intending to switch his moderation guidelines to “Reign of Terror” and to delete any similar comment threads in the future, as Said (as far as I can tell) refuses to do much cognitive labor in the rest of the thread until Benquo runs out of energy.
So, to get this conversation started[5] and to maybe give people a bit more trust that I am tracking some things they care about: “Yes, a lot of the world is broken and stuck in an equilibrium of people trying to punish others for saying anything that might reflect badly on anyone else, in endless cycles of mutual reputation protection that make it hard to know what is fake and what is real, and yes, LessWrong, as most things on the internet, is at risk of falling into that attractor. I am tracking this, I care a lot about it, and even knowing that, I think it’s the right call to ban Said.”
Now that this is out of the way, I think we can talk in more mechanistic terms about what is going wrong in comment threads involving Said, and maybe even learn some things about online moderation.
Weaponized obtuseness and asymmetric effort ratios
An excerpt from a recent Benquo post on the Said moderation decision:
Said is annoying, both because his demands for rigor don’t seem prioritized reasonably, and because he’s simultaneously insulting and rude, dismissive of others’ feelings around being “insulted,” and sensitive to insults himself. He’s also disagreeable. I asked Zack for a list of Said’s best comments (see email), and they’re pretty much all procedural criticisms or calls for procedural rigor seemingly with no sense of proportion. In the spirit of his “show me the cake” principle, I don’t see the cake there. On the other hand, he’s a leading contributor to GreaterWrong, which makes this site more usable.
I concur with much of this. To get more concrete, in my experience a breakdown of the core dynamics that make comment threads with Said rarely worth it looks something like this:[6]
Said will write a top-level comment that reads like an implicit claim that you have violated some social norm in what you have written, for which you deserve to be punished (though this will not be said explicitly), or, failing that, one that makes you look negligent for not answering an innocuous, open-seeming question
You will try to address this claim by writing some kind of long response or explanation, answering his question or providing justification on some point
Said will dismiss your response as being totally insufficient, confused, or proving the very point he was trying to make
You will try to clarify more, while Said will continue to make insinuations that your failure to respond properly validates whatever judgment he is invoking
Motivated commenters/authors will go up a level and ask “by what standard are you trying to invoke a negative judgment here?”[7]
Said will deny any and all such invocations of standards or judgment, saying he is (paraphrased) “purely talking on the object level and not trying to make any implicit claims of judgment or low-status or any such kind”
After all of this you are left questioning your own sanity, try a bit more to respond on the object level, and ultimately give up, feeling dejected and like a lot of people on LessWrong hate you. You probably don’t post again.
In the more extreme cases, someone will try to prosecute this behavior and reach out to the moderators, or make a top-level post or quick take about it. Whoever does this quickly finds out that the moderators feel approximately as powerless to stop this cycle as they themselves are. This leaves you even more dejected.
With non-trivial probability your post or comment ends up hosting a 100+ comment thread with detailed discussion of Said’s behavior and moderation norms and whether it’s ever OK to ban anyone, in which voting and commenting is largely dominated by the few people who care much more than average about banning and censorship. You feel an additional pang of guilt and concern about how many people you might have upset with your actions, and how much time you might have wasted.
Now, I think it is worth asking what the actual issue with the comments above is. Why do they produce this kind of escalation?
Asymmetric effort ratios and isolated demands for rigor
A key dynamic in many threads with Said is that the critic has a pretty easy job at each step. First of all, they have little to lose. They need to make no positive statements and explain no confusing phenomena. All they need to do is ask questions, or complain about the imprecision of some definition. If the author can answer compellingly, well, then the critic can take credit for having helped elicit an explanation for a confusion that clearly many people must have had. And if the author cannot answer compellingly, then even better: the critic has properly identified and prosecuted bad behavior and excised the bad ideas that otherwise would have polluted the commons. At the end of the day, are you really going to fault someone for just asking questions? What kind of totalitarian state are you trying to create here?
The critic can disengage at any point. No one faults a commenter for suddenly disappearing or not giving clear feedback on whether a response satisfied them. The author, on the other hand, does usually feel responsible for reacting and responding to any critique made of his ideas, which he dared to put so boldly and loudly in front of the public eye.
My best guess is that the usual ratio of “time it takes to write a critical comment” to “time it takes to respond to it at a level that will broadly be accepted well” is about 5x. This isn’t in itself a problem in an environment with lots of mutual trust and trade, but in an adversarial context it means that it’s easily possible to run a DDoS attack on basically any author whose contributions you do not like, by just asking lots of questions, insinuating holes or potential missing considerations, and demanding a response, approximately independently of the quality of their writing.
For related musings see the Scott Alexander classic Beware Isolated Demands For Rigor.
Maintaining strategic ambiguity about any such dynamics
Of course, commenters on LessWrong are not dumb, and have read Scott Alexander, and have vastly more patience than most commenters on the internet, and so many of them will choose to dissect and understand what is going on.
The key mechanism that shields Said from the accountability that would accompany such analysis is his care to avoid making explicit claims about the need for the author to respond, or about the implicit judgment associated with his comments. In any given comment thread, each question is phrased so as to be ambiguous between a question driven by curiosity and a question intended to expose the author’s hypocrisy.
This ambiguity is not without healthy precedent. I have seen healthy math and science departments in which a prodding question might be phrased quite politely, ambiguous between a matter of personal confusion and the intention of pointing out a flaw in a proof.
“Can you explain to me how you got from Lemma C to Proof D? It seems like you are invoking an assumption here I can’t quite understand”
is a common kind of question. And I think overall fine, appropriate, healthy.
That said, most of the time, when I was in those environments, I could tell what was going on, and I mostly knew that other people could tell as well. If someone repeatedly asked questions in a way that clearly indicated an understanding of a flaw in the provided proofs or arguments, but kept insisting on only getting there via Socratic questioning, they would lose points over time. And if they kept asking probing questions in each seminar that were easily answered, with each question taking up space and bandwidth, then they would quickly lose lots of points and be asked to please interrupt less. Furthermore, the tone of voice would often make it clear whether the question asked was more on the genuine-curiosity side or the suggested-criticism side.
But here on the internet, in the reaches of an online forum, with...
people coming in and out,
the popularity of topics waxing and waning with the fashions of the internet,
few common-knowledge creating events,
the bandwidth limited to text,
voting in most niche threads dominated by a very small subset of people whom you can’t see, but who nevertheless shape what gets approval,
and branching comment threads with no natural limitation on the bandwidth or volume of requests that can be made of an author,
...the mechanisms that productively channeled this behavior no longer work.
And so here, where you can endlessly deflect any accusations by just falling back on saying “look, all I am doing is asking questions and asking for clarification, clearly that is not a crime”, with little risk that common knowledge gets built that you are wasting people’s time or making repeated bids for social censure that fail to get accepted, there is no natural limit to how much heckling you can do. You alone could be the death of a whole subculture, if you are just persistent enough.
And so, at the heart of all of this, is either a deep obliviousness, or more likely, the strategic disarmament of opposition by denying load-bearing subtext (or anything else that might obviously allow prosecution) in these interactions.
And unfortunately, this does succeed. My guess is that it succeeds better here on LessWrong than in most places, because we have a shared belief in the power of explicit reasoning, and we have learned an appropriate fear of focusing on subtext, with a culture where debate is supposed to focus on object-level claims, not whatever status dynamics are going on. I think this is good and healthy most of the time. But the purpose of those norms is not to completely eschew analysis and evaluation of the underlying status dynamics, but simply to separate them from the object-level claims. (I also think it’s good to have some norms against focusing too much on status dynamics and claims in total, so a lot of the generic hesitation is justified; I elaborate on this a bit later.)
But I think in doing so, part by selection, part by training, we have created an environment where trying to police the subtext and status dynamics surrounding conversations gets met with fear and counter-reactions that make moderation and steering very difficult.[8]
Crimes that are harder to catch should be more harshly punished
A small sidenote on a dynamic relevant to how I am thinking about policing in these cases:
A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.
If someone can gain, in expectation, G dollars by committing some crime (which has negative externalities of E dollars), with a probability p of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment (C) large enough that p·C > G, i.e. C > G/p.
Or in less mathy terms, the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.
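To make the arithmetic concrete, here is a worked instance of the deterrence condition above (the specific detection probabilities are hypothetical, chosen purely for illustration):

$$p \cdot C > G \;\Longleftrightarrow\; C > \frac{G}{p}, \qquad p = 0.9 \Rightarrow C > 1.1\,G, \qquad p = 0.1 \Rightarrow C > 10\,G$$

So if a deniable form of a transgression is caught a ninth as often as the direct form, the punishment upon being caught needs to be roughly nine times larger to produce the same deterrent effect.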
In this case, a core component of the pattern of plausibly deniable aggression that I think is present in much of Said’s writing is that it is very hard to catch someone doing it, and even harder to prosecute it successfully in the eyes of a skeptical audience. As such, in order to maintain a functional incentive landscape, the punishment for being caught in passive or ambiguous aggression needs to be substantially larger than for, e.g., direct aggression: even though being straightforwardly aggressive has in some sense worse effects on culture and norms (though also less bad effects in some other ways)[9], the probability of catching someone in ambiguous aggression is much lower.
Concentration of force and the trouble with anonymous voting
An under-discussed aspect of LessWrong is how voting affects culture, author expectations and conversational dynamics. Voting is anonymous, even to admins (the only cases where we look at votes is when we are investigating mass-downvoting or sockpuppetting or other kinds of extreme voting abuse). Now, does that mean that everyone is free to vote however they want?
The answer is a straightforward “no”. Ultimately, voting is a form of participation on the site that can be done well or badly, and while I think it’s good for that participation to be anonymous and generally shielded from retaliation, at a broader level it is the job of the moderators to pay attention to unhealthy vote dynamics. We cannot police what you do with your votes, but if you abuse them, you will make the site worse, and we might end up having to change the voting system towards something less expressive but more robust as a result.
These are general issues, but how do they relate to this whole Said banning thing?
Well, another important dimension of how the dynamics in these threads go is roughly the following:
Said makes a top-level comment asking a question or making some kind of relatively low-effort critique
This will most of the time get upvoted, because questions or top-level critiques rarely get downvoted
The discussion continues for 3-4 replies, and with each reply the number of users voting goes down
By the time the discussion is ~3 levels deep, basically all voting is done by two groups of people: LessWrong moderators and a very small set of LessWrong users who are extremely active and tend to take a strong interest in Said’s comments
If the LessWrong moderators do not vote, the author is now in a position where any further replies by Said get upvoted and their responses get reliably downvoted. If they do vote, the variance in voting in the thread quickly goes through the roof, resulting in lots of people with strongly upvoted or strongly downvoted comments, since everyone involved has very high vote-strength.
This is bad. The point of voting is to give an easy way of aggregating information about the quality and reception of content. When voting ends up dominated by a small interest[10] group without broader site buy-in, and with no one being able to tell that is what’s going on, it fails at that goal. And in this case, it’s distorting people’s perception about the site consensus in particularly high-stakes contexts where authors are trying to assess what people on the site think about their content, and about the norms of posting on LessWrong.
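To illustrate that variance dynamic numerically, here is a minimal simulation sketch (the voter counts and vote strengths are hypothetical, chosen only to show the shape of the effect, not drawn from real vote data):

```python
import random

def score_spread(voter_strengths, n_trials=10_000):
    """Standard deviation of a comment's net karma, if each voter
    independently up- or downvotes with their full vote strength."""
    totals = [
        sum(random.choice((-s, s)) for s in voter_strengths)
        for _ in range(n_trials)
    ]
    mean = sum(totals) / n_trials
    return (sum((t - mean) ** 2 for t in totals) / n_trials) ** 0.5

# Hypothetical top-level comment: ~40 casual voters with vote strength 1-2.
top_level = [random.choice((1, 2)) for _ in range(40)]
# Hypothetical deep thread: 4 highly active users with vote strength 7-10.
deep_thread = [random.choice((7, 8, 9, 10)) for _ in range(4)]

print(score_spread(top_level))    # spread of ~10 karma across 40 voters
print(score_spread(deep_thread))  # spread of ~17 karma from just 4 voters
```

Under these made-up numbers, four high-strength voters produce karma swings larger than forty ordinary voters do, while carrying far less information about broader site consensus.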
I don’t really know what to do about this. It’s one of the things that makes me more interested in bans than other remedies, since site-wide banning also comes with removal of vote privileges, though of course we could also rate-limit votes or find some other workaround that achieves the same aim. I am also not confident this is really what’s going on, as I do not look at vote data, and nothing here alone would make me confident enough to want to ban anyone, but I think the voting has been particularly whack in a lot of these threads, and that seemed important to call out.
But why ban someone, can’t people just ignore Said?
I hope the dynamics I outlined help explain why ignoring Said is not usually a socially viable option. Indeed, Said himself does not think it’s a valid option:
What this de-facto means is that there is always an obligation by the author to respond to your comment, or otherwise be interpreted as ignorant.
There is always an obligation by any author to respond to anyone’s comment along these lines. If no response is provided to (what ought rightly to be) simple requests for clarification (such as requests to, at least roughly, define or explain an ambiguous or questionable term, or requests for examples of some purported phenomenon), the author should be interpreted as ignorant. These are not artifacts of my particular commenting style, nor are they unfortunate-but-erroneous implications—they are normatively correct general principles.

Many people don’t have the time, or find engaging with commenters exhausting
Then they shouldn’t post on a discussion forum, should they? What is the point of posting here, if you’re not going to engage with commenters?
this creates a default expectation that if they do not engage extensively with your comments in particular (with higher priority than anything else in the comment thread) there will be a public attack on them left unanswered.
This is only because most people don’t bother to ask (what I take to be) such obvious, and necessary, clarifying questions. (Incidentally, I take this fact to be a quite damning indictment of the epistemic norms of most of Less Wrong’s participants.) When I ask such questions, it is because no one else is doing it. I would be happy to see others do it in my stead.
distinguish between a question that is intended as a critique when left unanswered, and one that is an optional request for clarification
Viewing such clarifications as “optional” also speaks to an unacceptably low standard of intellectual honesty.
Once again: there is no confusion; there is no dichotomy. A request for clarification is neither an attack nor even a critique. The normal, expected form of the interaction, in the case where the original post is correct, sensible, and otherwise good (and where the only problem is an insufficiency in communicating the idea), is simply “[request for clarification] → [satisfactory clarification] → [end]”. Only a failure of this process to take place is in need of “defending”.[11]
Now, in the comment thread in which the comment above was made, both mods and authors have clarified that no, authors do not have an obligation to respond with remotely the generality outlined here, and the philosophy of discourse Said outlines is absolutely not site consensus. However, this does little for most authors. Most people have never read the thread where those clarifications were made, and never will. And even if we made a top-level post, or added a clarification to our new user guidelines, this would do little to change what is going on.
Because every time Said leaves a top-level comment, it is clear to most authors and readers that he is implying the presence of a social obligation to respond. And because of the dynamics I elaborated on in the previous section, it is not feasible for moderators or other users to point out the underlying dynamic each time, which itself requires careful compiling of evidence and pointing out (and then probably disputing) subtext.
So, despite it being close to site consensus that authors do not face obligations to respond to each and every one of Said’s questions, on any given post there is basically nothing to be done to build common knowledge of this. Said can simply make another comment thread implying that if someone doesn’t respond they deserve to be judged negatively, and there will always be enough people voting who have not seen this pattern play out, or who even support Said’s view of author obligation against the broad site consensus, to get the questions and critiques upvoted. And so, despite their snark and judgment, they will appear to be made in good standing, and so to deserve a response, and the cycle will begin anew.
Now in order to fix this dynamic, the moderation team has made multiple direct moderation requests of Said, summarized here as a high-level narrative (though in reality it played out over more like a decade):
First we asked Said to please stop implying such obligations to authors, which Said rejected, both denying that he was doing it and disputing that the request was coherent[12]
So eventually we encouraged authors to self-moderate the comments on their posts more[13], under supervision of LessWrong moderators, allowing at least individual authors to stop Said from commenting if they didn’t want to engage with him
To which Said responded by claiming that anyone who dared to use moderation tools on their posts could only possibly do so for bad-faith reasons and should face social punishment for doing so
To which we responded by telling Said to please let authors moderate as they desire and to not do that again, and gave him a 3 month rate-limit
After the rate-limit ended, he seemed to behave better for a few months, until he did the exact things we had rate-limited him for, once again calling for authors to face censure because they used their moderation tools
So ultimately, what other option do we have but a ban? We could attach a permanent mark to Said’s profile with a link to a warning that this user has a long history of asking heckling questions and implying social punishment without much buy-in for that, but that seems to me to create more of an environment of ongoing hostility, and many would interpret such a mark as cruel to have imposed.
So no, I think banning is the best option available.
Ok, but shouldn’t there be some kind of justice process?
I expect it to be uncontroversial to suggest that most moderation on LessWrong should be soft-touch. The default good outcome of a moderator showing up on a post is to leave a comment warning of some bad conversational pattern, or telling one user to change their behavior in some relatively specific way. The involved users take the advice, the thread gets better or ends, and everyone moves on with their day.
Ideally this process starts and completes within 10 minutes, from the moderator noticing something going off the rails to sending off the comment providing either advice or a warning.
However, sometimes these interactions escalate, or moderators notice a more systematic pattern with specific users causing repeated problems, and especially if someone disagrees with the advice or recommendation of a moderator, it’s less clear how things are supposed to proceed. Historically the moderation standard for LessWrong has been unilateral dictatorship awarded to a head admin. But even with that dictatorship being granted to me, it is still up to me to decide how I want myself and the LessWrong team to handle these kinds of cases.
At a high-level I think there are two reasonable clusters of approaches here:
High-stakes decisions get made by some kind of court that tries to recruit impartial judges to a dispute. The judges get appointed by the head moderator, but have a tenure and substantial independent power.
High-stakes decisions get made directly by the acting head-moderator, who takes personal responsibility for all decisions on the site
(In either case it would make sense to try to generalize any judgments or decisions made into meaningful case-law to serve as the basis for future similar decisions, and to be added to some easily searchable set of past judgments that users and authors can use to gain more transparency into the principles behind moderation decisions, and predict how future decisions are likely to be made.)
Both of these approaches are fairly common in general society. Companies generally have a CEO who can unilaterally make firing and rule-setting decisions. Older institutions and governmental bodies often tend to have courts or committees. I considered for quite a while whether as part of this moderation decision I should find and recruit a set of more impartial judges to make high-stakes decisions like this.
But after thinking about it for a while, I decided against it. There are a few reasons:
Judging cases like this is hard and takes a lot of work. It also benefits from having spent lots of time thinking about moderation and incentives and institutional design. Recruiting someone good for this role would require paying them a bunch for their high opportunity cost, and is also just kind of objectively hard.
Incentivizing strong judge performance seems difficult. The default outcome I’ve seen from committees and panels is that everyone on them half-asses their job, because they rarely have a stake in the outcome being good. Even if someone cares about LessWrong, that is not the same as being generally held personally responsible for it, and having your salary and broader reputation depend on how well LessWrong is going.
I think there is a broader pattern in society of people heavily optimizing for “defensibility”, and this mostly makes things worse. I think most of the reason for the popularity of committees and panels and boards deciding high-stakes matters is that this makes the decision harder to attack externally, by creating a veneer of process. Blankfacedness as Scott Aaronson described it is also a substantial part of this. I do not want LessWrong to become a blankfaced bureaucracy that hands down judgment from on high. Whenever it’s possible I would like people to be able to talk to someone who is directly responsible for the outcome of the decision, who could change their mind in that very conversation if presented with compelling evidence.
Considerations like these are what convinced me that even high-stakes decisions like this should be made on my own personal conscience. I have a stake in LessWrong going well, and I can take the time to give these kinds of decisions the resources they deserve to get made well, and I can be available for people to complain and to push on if people disagree with them.
But in doing so, I do want to do a bunch of stuff that gets us the good parts of the more judicially oriented process. Here are some things that I think make sense, even in a process oriented around personal responsibility:
I think it’s good to publish countervailing opinions when possible. I frequently expect people on the Lightcone team to disagree with decisions I make, and when that happens, I will encourage them to write up their perspective, which will serve as a record that makes it easier to spot broader blind spots in my decision-making (and also reduces gaslighting dynamics where people feel forced to support decisions I make out of fear of being retaliated against).
I think it’s good to have a cool-off period before making high-stakes moderation decisions. I expect to basically never make a ban decision on any contributor with a substantial comment history without taking at least a few days to step away from any active discussion to think about it. This doesn’t mean that a ban decision might not be preceded by a heated discussion (I think it’s usually good to give whoever is being judged the chance to defend themselves, and those discussions might very well turn out heated), but the basic shape of the decision will be formed away from any specific heated thread.
I want to make it easier to find past moderation decisions, and for people to form a sense of the case-law of the site. As part of working on this post I’ve started on a redesign of the /moderation page to make it substantially easier to see a history of moderation comments, as well as to include an overview of the basic principles of how moderation is done on the site.
I think I should very rarely[14] prevent whoever is affected by a moderation decision from defending themselves on LessWrong. For practical reasons, I need to limit the time I personally spend replying and engaging with their defense, but I think it makes sense to allow the opposing side in basically all cases like this to publish a defense on the site, and to do at least some signal-boosting of it together with any moderation decisions.
So what options do I have if I disagree with this decision?
Well, the first option you always have, and which is the foundation of why I feel comfortable governing LessWrong with relatively few checks and balances, is to just not use LessWrong. Not using LessWrong probably isn’t that big of a deal for you. There are many other places on the internet to read interesting ideas, to discuss with others, to participate in a community. I think LessWrong is worth a lot to a lot of people, but I think ultimately, things will be fine if you don’t come here.
Now, I do recommend that if you stop using the site, you do so by loudly giving up, not quietly fading. Leave a comment or make a top-level post saying you are leaving. I care about knowing about it, and it might help other people understand the state of social legitimacy LessWrong has in the broader world and within the extended rationality/AI-Safety community.
Of course, not all things are so bad as to make it the right choice to stop using LessWrong altogether. You can complain to the mods on Intercom, or make a shortform, or make a post about how you disagree with some decision we made. I will read them, and there is a decent chance we will respond or try to clarify more or argue with you more, though we can’t guarantee this. I also highly doubt you will end up coming away thinking that we are right on all fronts, and I don’t think you should use that as a requirement for thinking LessWrong is good for the world.
And if the stakes are even higher, you can ultimately try to get me fired from this job. The exact social process for who can fire me is not as clear to me as I would like, but you can convince Eliezer to give head-moderatorship to someone else, or convince the board of Lightcone Infrastructure to replace me as CEO, if you really desperately want LessWrong to be different than it is.
But beyond that, there is no higher appeals process. At some point I will declare that the decision is made, and stands, and I don’t have time to argue it further, and this is where I stand on the decision this post is about.
An overview of past moderation discussions surrounding Said
I have tried to make this post relatively self-contained and straightforward to read, trying to avoid making you, the reader, feel like you have to wade through 100,000+ words of previous comment threads to have any idea what is going on[15], at least from my perspective. However, for the sake of completeness, and because I do think it provides useful context for the people who want to really dive into this kind of decision, here is a quick overview of past moderation discussions and decisions related to Said:
8 years ago, Elizabeth wrote a moderation warning to Said, and there was some followup discussion with him on Intercom. With habryka’s buy-in, Elizabeth said roughly “if you don’t change your behavior in some way you’ll be banned”, and Said said roughly “it’s not worth it for me to change my behavior, I would rather not participate on LW at all in that case”[16]. He did not change his behavior, we did not end up banning him at this time, and he also did not stop participating on LW.
7 years ago Ben Pace wrote a long moderation warning to Said, in a thread (with ~32,000 words in it) that expanded under Said’s comments on a post.
6 years ago on New Year’s Eve/New Year’s Day, in a thread (with ~35,000 words in it), I wrote ~40 comments arguing with Said that his commenting style was corrosive to a good commenting culture on LessWrong.
2 years ago there was a conflict between Said Achmiz and Duncan Sabien that spanned many posts and comment threads and led Raymond Arnold to write a moderation post about it that received 560 (!) comments, or ~64,000 words. Ray took the mod action of giving Said a rate-limit of 3-comments-per-post-per-week, for a duration of 3 months.
Last month, a user banned Said from commenting on his posts. Said took to his own shortform, where (amongst other things) he and others called that author a coward for banning him. Then Wei Dai commented on a 2018 post that announced that authors (over a certain karma threshold) have the ability to ban users from their posts, to argue against this being allowed, under which a large thread of ~61,000 words developed, again defending Said from being banned from that user’s posts.
The most substantial of these is Ray’s moderation judgment from two years ago. I would recommend the average reader not read it all, but it is the result of another 100+ hour effort, and so contains a bunch of explanation and context. You can read through the comments Ray made in the appendix to this post.
What does this mean for the rest of us?
My current best guess is that not that much has to change. My sense is Said has been a commenter with uniquely bad effects on the site, and while there are people who are making mistakes along similar lines, there are very few who are as prolific or have invested as much into the site. I think the most likely way I can imagine the considerations in this post resulting in more than just banning Said is if someone decides to intentionally pick up the mantle of Said Achmiz in order to fill the role that they perceive he filled on the site, and imitate his behavior in ways that recreate the dynamics I’ve pointed out.[17]
There are a few users about whom I have concerns similar to those I had about Said, and I do want this post to save me effort in future moderation disputes. I also expect to refer back to the ideas in this post for many years in various moderation discussions and moderation judgments, but I don’t have any immediate instances of that in mind.
I do think it makes sense to try to squeeze out some guidance for future moderation decisions out of this. So in case-law fashion, here are some concrete guidelines derived from this case:
You are at least somewhat responsible for the subtext other people read into your comments; you can’t disclaim all responsibility for it
Sometimes things we write get read by other people to say things we didn’t mean. Sometimes we write things that we hope other people will pick up, but we don’t want to say straight out. Sometimes we have picked up patterns of speech or metaphors that we have observed “working”, but that actually don’t work the way we think they do (like being defensive when we get negative feedback resulting in less negative feedback, which one might naively interpret as being assessed less negatively).
On LessWrong, it is okay if an occasional stray reader misreads your comments. It is even okay if you write a comment that most of the broad internet would predictably misunderstand, or view as some kind of gaffe or affront. LessWrong has its own communication culture.
But if a substantial fraction of other commenters consistently interpret your comments to mean something different than what you claim they say when asked for clarification, especially if they do so in contexts where that misinterpretation happens to benefit you in conflicts you are involved in, then that is a thing you are at least partially responsible for.[18]
This all also intersects with “decoupling vs. contextualizing” norms. A key feature of LessWrong is that people here tend to be happy to engage with any specific object-level claim, largely independently of what the truth of that claim might imply at a status, reputation, or blame level about the rest of the world. This, if you treat it as a single dimension, puts LessWrong pretty far toward having “decoupling” norms. I think this is good and important and a crucial component of how LessWrong has maintained its ability to develop important ideas, and to help people orient to the world.
This intersection produces a tension. If you are responsible for people on LessWrong reading context and implications and associations into your contributions you didn’t intend, then that sure sounds like the opposite of the kind of decoupling norms that I think is so important for LessWrong.
I don’t have a perfect resolution to this. Zack had a post on this with some of his thoughts that I found helpful:
I argue that, at best, this is a false dichotomy that fails to clarify the underlying issues—and at worst (through no fault of Leong or Nerst), the concept of “contextualizing norms” has the potential to legitimize derailing discussions for arbitrary political reasons by eliding the key question of which contextual concerns are genuinely relevant, thereby conflating legitimate and illegitimate bids for contextualization.
Real discussions adhere to what we might call “relevance norms”: it is almost universally “eminently reasonable to expect certain contextual factors or implications to be addressed.” Disputes arise over which certain contextual factors those are, not whether context matters at all.
The standard academic account explaining how what a speaker means differs from what the sentence the speaker said means, is H. P. Grice’s theory of conversational implicature. Participants in a conversation are expected to add neither more nor less information than is needed to make a relevant contribution to the discussion.
I disagree with Zack that the dichotomy between decoupling and contextualizing norms fails to clarify any of the underlying issues. I do think you can probably graph communities and spaces pretty well on a vector from “high decoupling” to “high contextualizing”, and this will allow you to make a lot of valid predictions.
But as Zack helpfully points out here, the key thing to understand is that of course many forms of context and implications are obviously relevant and important, and worth taking into account during a conversation. This is true on LessWrong as well as anywhere else. If your comments have a consistent subtext of denigrating authors who invoke reasoning by analogy, because you think most people who reason by analogy are confused (a potentially reasonable if contentious position on epistemics), then you better be ready to justify that denigration when asked about it.
Responses of the form “I am just asking these people for the empirical support they have for their ideas, I am not intending to make a broader epistemological point” are OK if they reflect a genuine underlying policy of not trying to shift the norms of the site towards your preferred epistemological style, together with associated (bounded) efforts to limit such effects when asked. If you do intend to shift the norms of the site, you had better be ready to argue for that, and it is not OK to follow an algorithm that intends to have a denigrating effect but shields itself from the need for justification or inspection by invoking decoupling norms. What work is respected and rewarded on LessWrong is of real and substantial relevance to the participants of LessWrong. Sure, obsession with that dimension is unhealthy for the site, and I think it’s actively good to ignore it most of the time. But, especially if the subtext is repeated across many comments from the same author, it is the kind of thing that we need to be able to talk about, and sometimes moderate.
And as such, within these bounds, “tone” is very much a thing the LessWrong moderators will pay attention to, as are the implied connotations of the words you use, the metaphors you choose, and the associations that come with them. And while occasionally a moderator will take the effort to disentangle all your word choices, and pin down in excruciating detail why something you said implied something else and how you must have been aware of that on some level given what you were writing, they do not generally have the capacity to do so in most circumstances. Moderators need the authority to, at some level, police the vibe of your comments, even without a fully mechanical explanation of how that vibe arises from the specific words you chose.
Do not try to win arguments by fights of attrition
A common pattern on the internet is that whoever has the most patience for re-litigating and repeating their points ultimately wins almost any argument. As long as you avoid getting visibly frustrated, or insulting your opponents, and display an air of politeness, you can win most internet arguments by attrition. If you are someone who might have multiple hours per day available to write internet comments, you can probably eke out some kind of concession or establish some kind of norm in almost any social space, or win some edit war that you particularly care about.
This is a hard thing to combat, but the key thing that makes this tactic possible is being in a social space in which it is assumed that comments or questions are made in good standing as long as they aren’t obviously egregious.
On LessWrong, if you make a lot of comments, or ask a lot of questions, with a low average hit-rate on providing value by the lights of the moderators, my best guess is you are causing more harm than good, especially if many of those comments are part of conversations that try to prove some kind of wrongdoing or misleadingness on behalf of your interlocutor (so that they feel an obligation to respond). And this is a pattern we will try to notice and correct (while also recognizing that sometimes it is worth pressing people on crucial and important questions, as people can be evasive and try to avoid reasonable falsification of their ideas in defense of their reputation/ego).
Building things that help LessWrong’s mission will make it less likely you will get banned
While the overall story of Said is one of him ultimately getting banned from LessWrong, his having built readthesequences.com and greaterwrong.com, and his contributions to gwern.net, definitely raised our threshold for banning him very substantially.
And I overall stand behind this choice. Being banned from LessWrong does affect people’s ability to contribute and participate in the broader Rationality community ecosystem, and I think it makes sense to tolerate people’s weaknesses in one domain, if that allows them to be a valuable contributor in another domain, even if those two domains are not necessarily governed by the same people.
So yeah, I do think you get to be a bit more of a dick, for longer, if you do a lot of other stuff that helps LessWrong’s broader mission. This has limits, and we will invest a bunch into limiting the damage or helping you improve, but it does also just help.
So with all that Said
And so we reach the end of this giant moderation post. I hope I have clarified at least my perspective on many things. I will aim to limit my engagement with the comments of this post to at most 10 hours. Said is also welcome to send additional commentary to me in the next 10 days, and if so, I will append it to this post and link to it somewhere high up so that people can see it if they get linked here.[19] I will also make one top-level comment below this post under which Said will be allowed to continue commenting for the next 2 weeks, and where people can ask questions.
Farewell. It’s certainly been a ride.
Appendix: 2022 moderation comments
In 2022, the last time we took moderation action on Said, Ray wrote 10,000+ words, which I have extracted here for convenience. I don’t recommend the average reader read them all, but I do think they were another high-effort attempt at explaining what was going on.
Overview/outline of initial comment
Okay, overall outline of thoughts on my mind here:
What actually happened in the recent set of exchanges? Did anyone break any site norms? Did anyone do things that maybe should be site norms but we hadn’t actually made it an explicit rule and we should take the opportunity to develop some case law and warn people not to do it in the future?
5 years ago, the moderation team issued Said a mod warning about a common pattern of engagement of his that a lot of people have complained about (operationalized as “demanding more interpretive labor than he has given”). We said if he did it again we’d ban him for a month. My vague recollection is that he basically didn’t do it for a couple of years after the warning, but maybe started to somewhat over the past couple of years; I’m not sure (I think he may have not done the particular thing we asked him not to, but I’ve had a growing sense that his commenting is making me more wary of how I use the site). What are my overall thoughts on that?
Various LW team members have concerns about how Duncan handles conflict. I’m a bit confused about how to think about it in this case. I think a number of other users are worried about this too. We should probably figure out how we relate to that and make it clear to everyone.
It’s Moderation Re-Evaluation Month. It’s a good time to re-evaluate our various moderation policies. This might include “how we handle conflict between established users”, as well as “are there any important updates to the Authors Can Moderate Their Posts rules/tech?”
It seems worthwhile to touch on each of these at least somewhat. I’ll follow up on each topic at least somewhat.
Recap of mod team history with Said Achmiz
First, some background context. When LW2.0 was first launched, the mod team had several back-and-forths with Said over complaints about his commenting style. He was (and I think still is) the most-complained-about LW user. We considered banning him.
Ultimately we told him this:
As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply it here, it suggests we need both a great generative and a great evaluative process before the standards problem is solved, at the same time as the actually-having-a-community-who-likes-to-contribute-thoughtful-and-effortful-essays-about-important-topics problem is solved, and only having one solved does not solve the problem.
I, Oli and Ray will build a better evaluative process for this online community, that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that’s fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the evaluative before the whole system works, and while we’ve not reached there you’re correct to be worried and want to enforce the standards yourself with low-effort comments (and I don’t mean to imply the comments don’t often contain, implicit within them, very good ideas).
But unfortunately, given your low-effort criticism feels so aggressive (according to me, the mods, and most writers I talk to in the rationality community), this is just going to destroy the first stage before we get the second. If you write further comments in this pattern which I have pointed to above, I will not continue to spend hours trying to pass your ITT and responding; I will just give you warnings and suspensions.
I may write another comment in this thread if there is something simple to clarify or something, but otherwise this is my last comment in this thread.
Followed by:
This was now a week ago. The mod team discussed this a bit more, and I think it’s the correct call to give Said an official warning (link) for causing a significant number of negative experiences for other authors and commenters.
Said, this moderation call is different than most others, because I think there is a place for the kind of communication culture that you’ve advocated for, but LessWrong specifically is not that place, and it’s important to be clear about what kind of culture we are aiming for. I don’t think ill of you or that you are a bad person. Quite the opposite; as I’ve said above, I deeply appreciate a lot of the things you’ve built and advice you’ve given, and this is why I’ve tried to put in a lot of effort and care with my moderation comments and decisions here. I’m afraid I also think LessWrong will overall achieve its aims better if you stop commenting in (some of) the ways you have so far.
Said, if you receive a second official warning, it will come with a 1-month suspension. This will happen if another writer has an extensive interaction with you primarily based around you asking them to do a lot of interpretive labour and not providing the same in return, as I described in my main comment in this thread.
I do have a strong sense of Said being quite law-abiding/honorable about the situation despite disagreeing with us on several object- and meta-level moderation policies, which I appreciate a lot.
I do think it’s worth noting that LessWrong 2.0 feels like it’s at a more stable point than it was in 2018. There’s enough critical mass of people posting here that I’m less worried about annoying commenters killing it completely (which was a very live fear during the initial LW2.0 revival).
But I am still worried about the concerns from 5 years ago, and do basically stand by Ben’s comment. And meanwhile I still think Said’s default commenting style is much worse than nearby styles that would accomplish the upside with less downside.
My summary of previous discussions as I recall them is something like:
Mods: “Said, lots of users have complained about your conversation style, you should change it.”
Said: “I think a) your preferred conversation norms here don’t make sense to me and/or seem actively bad in many cases, and b) I think the thing my conversation style is doing is really important for being a truthtracking forum.”
[...lots of back-and-forth...]
Mods: ”...can you change your commenting style at all?”
Said: “No, but I can just stop commenting in particular ways if you give me particular rules.”
Then we did that, and it sorta worked for a while. But it hasn’t been wholly satisfying to me. (I do have some sense that Said has recently ended up commenting more in threads that are explicitly about setting norms, and while we didn’t spell this out in our initial mod warning, I do think it is more costly to ban someone from discussions of moderation norms than from other discussions. I’m not 100% sure how to think about this.)
Death by a thousand cuts and “proportionate”(?) response
A way this all feels relevant to current disputes with Duncan is that the thing that is frustrating about Said is not any individual comment, but an overall pattern that doesn’t emerge as extremely costly until you see the whole thing. (I.e. if there’s a spectrum of how bad behavior is, from 0-10, and things that are a “3” are considered bad enough to punish, someone who’s doing things that are bad at a “2.5” or “2.9” level doesn’t quite feel worth reacting to. But if someone does them a lot it actually adds up to being pretty bad.)
If you point this out, people mostly shrug and move on with their day. So, to point it out in a way that people actually listen to, you have to do something that looks disproportionate if you’re just paying attention to the current situation. And, also, the people who care strongly enough to see that through tend to be in an extra-triggered/frustrated state, which means they’re not at their best when they’re doing it.
I think Duncan’s response is out of proportion to some degree (see the Vaniver thread for some reasons why; I have some more reasons I plan to write about).
But I do think there is a correct thing that Duncan was noting/reacting to, which is that actually yeah, the current situation with Said does feel bad enough that something should change, and indeed the mods hadn’t been intervening on it because it didn’t quite feel like a priority.
I liked Vaniver’s description of Duncan’s comments/posts as making a bet that Said was in fact obviously banworthy or worthy of significant mod action, and that there was a smoking gun to that effect, and if this was true then Duncan would be largely vindicated-in-retrospect.
I’ll lay out some more thinking as to why, but, my current gut feeling + somewhat considered opinion is that “Duncan is somewhat vindicated, but not maximally, and there are some things about his approach I probably judge him for.”
Maybe explicit rules against blocking users from “norm-setting” posts.
On blocking users from commenting
I still endorse authors being able to block other users (whether for principled reasons, or just “this user is annoying”). I think a) it’s actually really important that the site be fun for authors to use, b) there are a lot of users who are dealbreakingly annoying to some people but not others, and banning them from the whole site would be overkill, and c) authors aren’t obligated to lend their own karma/reputation to give space to other people’s content. If an author doesn’t want your comments on their post, whether for defensible reasons or not, I think it’s an okay answer that those commenters make their own post or shortform arguing the point elsewhere.
Yes, there are some trivial inconveniences to posting that criticism. I do track that in the cost. But I think that is outweighed by the effect on authors being motivated to post.
That all said...
Blocking users on “norm-setting posts”
I think it’s more worrisome to block users on posts that are making major momentum towards changing site norms/culture. I don’t think the censorship effects are that strong or distorting in most cases, but I’m most worried about them being distorting in cases that affect ongoing norms about what people can say.
There’s a blurry line here between posts that are putting forth new social concepts, posts advocating for applying those concepts to norms (either in the OP or in the comments), and, further, posts arguing about applying those concepts to specific people. I.e. I’d have an ascending wariness of each of these in turn.
I think it was already a little sketchy that Basics of Rationalist Discourse went out of its way to call itself “The Basics” rather than “Duncan’s preferred norms” (a somewhat frame-control-y move IMO, although not necessarily an unreasonable one), while also blocking Zack at the time. It feels even more sketchy to me to write Killing Socrates, which is AFAICT a thinly veiled “build-social-momentum-against-Said-in-particular” post, where Said can’t respond (and it’s disproportionately likely that Said’s allies also can’t respond).
Right now we don’t have tech to unblock users from a specific post who have been banned from all of a user’s posts. But this recent set of events has me leaning towards “build tech to do that”, and then making it a rule that posts at the threshold of “Basics” or higher (in terms of site-norm-momentum-building) need to allow everyone to comment.
I do expect that to make it less rewarding to make that sort of post. And, well, to (almost) quote Duncan:
Put another way: a frequent refrain is “well, if I have to put forth that much effort, I’ll never say anything at all,” to which the response is often [“sorry I acknowledge the cost here but I think that’s an okay tradeoff”]
Okay, but what do I do about Said when he shows up doing his whole pattern of subtly-missing-and/or-reframing-the-point while sprawling massive threads?
My answer is “strong downvote him, announce you’re not going to engage, maybe link to a place where you went into more detail about why if this comes up a lot, and move on with your day.” (I do generally wish Duncan did more of this and less trying to set the record straight in ways that escalate in IMO very costly ways.)
(I also kinda wish gjm had done this towards the beginning of the thread on LW Team is adjusting moderation policy.)
Verdict for 2023 Said moderation decisions
Preliminary Verdict (but not “operationalization” of verdict)
tl;dr – @Duncan_Sabien and @Said Achmiz can each write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on one of the following:
credibly commit to changing their behavior in a fairly significant way,
or, accept some kind of tech solution that limits their engagement in some reliable way that doesn’t depend on their continued behavior.
or, be banned from commenting on other people’s posts (but still allowed to make new top level posts and shortforms)
(After the two comments they can continue to PM the LW team, although we’ll have some limit on how much time we’re going to spend negotiating)
Some background:
Said and Duncan are both among the most complained-about users since LW2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I’d be sad to see go.
The LessWrong team has spent hundreds of person-hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of “we learned new useful things about site governance”), there’s a limit to how much it’s worth moderating or mediating conflict re: two particular users.
So, something pretty significant needs to change.
A thing that sticks out in both the case of Said and Duncan is that they a) are both fairly law-abiding (i.e. when the mods have asked them for concrete things, they adhere to our rules, and clearly support rule-of-law and the general principle of Well Kept Gardens), but b) both have a very strong principled sense of what a “good” LessWrong would look like and are optimizing pretty hard for that within whatever constraints we give them.
I think our default rules are chosen to be something that someone might trip over accidentally if they’re mostly trying to be a good stereotypical citizen but occasionally have a bad day. Said and Duncan are both trying pretty hard to be good citizens of another country, one that the LessWrong team is consciously not trying to be. It’s hard to build good rules/guidelines that robustly deal with that kind of optimization.
I still don’t really know what to do, but I want to flag that the goal I’ll be aiming for here is “make it such that Said and Duncan either have actively (credibly) agreed to stop optimizing in a fairly deep way, or, are somehow limited by site tech such that they can’t do the cluster of things they want to do that feels damaging to me.”
If neither of those strategies turns out to be tractable, banning is on the table (even though I think both of them contribute a lot in various ways and I’d be pretty sad to resort to that option). I have some hope that tech-based solutions can work.
(This is not a claim about which of them is more valuable overall, or better/worse/right-or-wrong-in-this-particular-conflict. There’s enough history with both of them being above-a-threshold-of-worrisome that it seems like the LW team should just actually resolve the deep underlying issues, regardless of who’s more legitimately aggrieved this particular week)
Re: Said:
One of the most common complaints I’ve gotten about LessWrong, from new users as well as established, generally highly regarded users, is “too many nitpicky comments that feel like they’re missing the point”. I think LessWrong is less fragile than it was in 2018 when I last argued extensively with Said about this, but I think it’s still an important/valid complaint.
Said seems to actively prefer a world where the people who are annoyed by him go away, and thinks it’d be fine if this meant LessWrong had radically fewer posts. I think he’s misunderstanding something about how intellectual progress actually works, and about how valuable his comments actually are. (As I said previously, I tend to think Said’s first couple comments are worthwhile. The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics)
We’ve had extensive conversations with Said about changing his approach here. He seems pretty committed to not changing his approach. So, if he’s sticking around, I think we’d need some kind of tech solution. The outcome I want here is that in practice Said doesn’t bother people who don’t want to be bothered. This could involve solutions somewhat specific-to-Said, or (maybe) be a sitewide rule that works out to stop a broader class of annoying behavior. (I’m skeptical the latter will turn out to work without being net-negative, capturing too many false positives, but seems worth thinking about)
Here are a couple ideas:
Easily-triggered rate-limiting. I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day. I expect fine-tuning this to actually work the way I imagine it in my head is a fair amount of work, but not that much. (See the sketch after this list for one way this might look.)
Proactive warning. If a post author has downvoted Said’s comments on their post multiple times, they get some kind of UI alert saying “Yo, FYI, admins have flagged this user as someone with a pattern of commenting that a lot of authors have found net-negative. You may want to take that into account when deciding how much to engage”.
There’s some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of “authors can ban users” is worth revisiting so my first impulse is to avoid investing in it further until we’ve had some more top-level discussion about the feature.
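To make the first two ideas concrete, here is a minimal sketch of what the trigger logic might look like. Everything in it is a hypothetical illustration: the type and function names, and all of the thresholds, are invented for this sketch and are not part of the actual LessWrong codebase.

```typescript
// Hypothetical sketch only: names and thresholds are invented, not actual
// LessWrong/ForumMagnum code.

interface AuthorPostStats {
  commentCount: number;    // comments this user has left on the post
  netKarma: number;        // summed karma of those comments
  authorDownvotes: number; // times the post's author downvoted this user here
}

const FREE_COMMENTS = 3;          // assumed: a few comments are always allowed
const DOWNVOTE_TRIGGER = -5;      // assumed: net karma at/below this trips the limit
const DAILY_WORD_BUDGET = 500;    // assumed word budget once rate-limited
const WARNING_DOWNVOTE_COUNT = 2; // assumed trigger for the author-facing notice

// Idea 1: easily-triggered, wordcount-based rate limit. Returns a daily
// word budget, or null if the user may keep commenting freely.
function wordLimitFor(stats: AuthorPostStats): number | null {
  if (stats.commentCount >= FREE_COMMENTS && stats.netKarma <= DOWNVOTE_TRIGGER) {
    return DAILY_WORD_BUDGET; // forces wrapping up current points quickly
  }
  return null;
}

// Idea 2: proactive warning shown to the post's author about a flagged user.
function shouldShowEngagementWarning(
  stats: AuthorPostStats,
  userIsAdminFlagged: boolean
): boolean {
  return userIsAdminFlagged && stats.authorDownvotes >= WARNING_DOWNVOTE_COUNT;
}

// Example: four comments netting -7 karma on one post trips the word limit.
console.log(wordLimitFor({ commentCount: 4, netKarma: -7, authorDownvotes: 2 })); // 500
```

The design choice doing the work here, if something like this were built, is that the limit is cheap to trigger and scoped to a single post, so it degrades a sprawling thread into a forced wrap-up rather than acting as a site-wide ban.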
Why is it worth this effort?
You might ask “Ray, if you think Said is such a problem user, why bother investing this effort instead of just banning him?”. Here are some areas I think Said contributes in a way that seem important:
Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.net. (Edit: as Ben Pace notes, this is pretty significant, and I agree with his note that “Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world”.)
Most of his comments are in fact just pretty reasonable and good in a straightforward way.
While I don’t get much value out of protracted conversations about it, I do think there’s something valuable about Said being very resistant to getting swept up in fad ideas. Sometimes the emperor in fact really does have no clothes. Sometimes the emperor has clothes, but you really haven’t spelled out your assumptions very well and are confused about how to operationalize your idea. I do think this is pretty important and would prefer Said to somehow “only do the good version of this”, but seems fine to accept it as a package-deal.
Re: Duncan
I’ve spent years trying to hash out “what exactly is the subtle but deep/huge difference between Duncan’s moderation preferences and the LW team’s?” I have found each round of that exchange valuable, but typically it didn’t turn out that whatever-we-thought-was-the-crux was a particularly Big Crux.
I think I care about each of the things Duncan is worried about (i.e. such as things listed in Basics of Rationalist Discourse). But I tend to think the way Duncan goes about trying to enforce such things is extremely costly.
Here’s this month/year’s stab at it: Duncan cares particularly about strawmans/mischaracterizations/outright-lies getting corrected quickly (i.e. within ~24 hours). (See Concentration of Force for his writeup on at least one set of reasons this matters.) I think there is value in correcting them or telling people to “knock it off” quickly. But,
a) moderation time is limited
b) even in the world where we massively invest in moderation… the thing Duncan cares most about moderating quickly just doesn’t seem like it should necessarily be at the top of the priority queue to me?
I was surprised by, and updated on, You Don’t Exist, Duncan getting as heavily upvoted as it did, so I think it’s plausible that this is all a bigger deal than I currently think it is. (That post goes into one set of reasons that getting mischaracterized hurts.) And there are some other reasons this might be important (that have to do with mischaracterizations taking off and becoming the de-facto accepted narrative).
I do expect most of our best authors to agree with Duncan that these things matter, and generally want the site to be moderated more heavily somehow. But I haven’t actually seen anyone but Duncan argue they should be prioritized nearly as heavily as he wants. (i.e. rather than something you just mostly take-in-stride, downvote and then try to ignore, focusing on other things)
I think most high-contributing users agree the site should be moderated more (see the significant upvotes on LW Team is adjusting moderation policy), but don’t necessarily agree on how. It’d be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
I don’t know that that really captured the main thing here. I feel less resolved on what should change on LessWrong re: Duncan. But I (and other LW site moderators) want to be clear that while strawmanning is bad and you shouldn’t do it, we don’t expect to intervene on most individual cases. I recommend strong-downvoting, and leaving one comment stating that the thing seems false.
I continue to think it’s fine for Duncan to moderate his own posts however he wants (although as noted previously I think an exception should be made for posts that are actively pushing sitewide moderation norms)
Some goals I’d have are:
people on LessWrong feel safe that they aren’t likely to get into sudden, protracted conflict with Duncan that persists outside his own posts.
the LessWrong team and Duncan are on-the-same-page about the LW team not being willing to allocate dozens of hours of attention at a moment’s notice in the specific ways Duncan wants. I don’t think it’s accurate to say “there’s no lifeguard on duty”, but I think it’s quite accurate to say that the lifeguard on duty isn’t planning to prioritize the things Duncan wants, so Duncan should basically participate on LessWrong as if there is, in effect, “no lifeguard” from his perspective. I’m spending ~40 hours this week processing this situation with a goal of basically not having to do that again.
In the past Duncan took down all his LW posts when LW seemed to be actively hurting him. I’ve asked him about this in the past year, and (I think?) he said he was confident that he wouldn’t. One thing I’d want going forward is a more public comment that, if he’s going to keep posting on LessWrong, he’s not going to do that again. (I don’t mind him taking down 1-2 problem posts that led to really frustrating commenting experiences for him, but if he were likely to take all the posts down that undercuts much of the value of having him here contributing)
FWIW I do think it’s moderately likely that the LW team writes a post taking many concepts from Basics of Rationalist Discourse and integrating them into our overall moderation policy. (It’s maybe doable for Duncan to rewrite the parts that some people object to, and to enable commenting on those posts by everyone, but I think it’s kinda reasonable for people to feel uncomfortable with Duncan setting the framing, and it’s worth the LW team having a dedicated “our frame on what the site norms are” anyway.)
In general I think Duncan has written a lot of great posts – many of his posts have been highly ranked in the LessWrong review. I expect him to continue to provide a lot of value to the LessWrong ecosystem one way or another.
I’ll note that while I have talked to Duncan for dozens(?) of hours trying to hash out various deep issues and not met much success, I haven’t really tried negotiating with him specifically about how he relates to LessWrong. I am fairly hopeful we can work something out here.
- ^
Why spend so much time engaging with a single commenter? Well, the answer is that I do think the specific way Said has been commenting on the site had a non-trivial chance of basically just killing the site, in the sense of good conversation and intellectual progress basically ceasing, if not pushed back on and the collateral damage limited by moderator action.
Said has been by far the most complained-about user on the site, with many top authors citing him as a top reason why they do not want to post or comment here, and I personally (and the LessWrong team more broadly) would have had little interest in further investing in LessWrong if the kind of culture that Said brings had taken hold here.
So the stakes have been high, and the alternative would have been banning, which in itself requires at least many dozens of hours of effort, and which, given that Said is a really valuable contributor via projects like greaterwrong and readthesequences.com, was a choice I felt appropriately hesitant about.
- ^
Now, one might think that it seems weird for one person to be able to derail a comment thread. However, I claim this is indeed the case. As long as you can make comments that do not get reliably downvoted, you can probably successfully cause a whole comment thread to almost exclusively focus on the concern you care about. This is the result of a few different dynamics:
Commenters on LW do broadly assume that they shouldn’t ask a question or write a critique that someone else has already asked, or has indirectly already been answered in a different comment (this is good, this is what makes comment sections on LW much more coherent than e.g. on HN)
LW is high enough trust that I think authors generally assume that an upvoted comment is made in good standing and as such deserves some kind of response. This is good, since it allows people to coordinate reasonably well on requesting justification or clarification. However, voting is really quite low-fidelity: it’s not that hard to write comments that will not get downvoted, and there is little consistent reputational tracking on LessWrong across many engagements, meaning it’s quite hard for anyone to lose good standing.
Ultimately I think it’s the job of the moderators to remove or at least mark commenters who have lost good standing of this kind, given the current background of social dynamics (or alternatively to rejigger the culture and incentives to remove this exploit of authors thinking non-downvoted comments are made in good standing)
- ^
Occupy Wall Street strikes me as another instance of the same kind of popular sneer culture. Occupy Wall Street had no coherent asks, no worldview that was driving their actions. Everyone participating in the movement seemed to have a different agenda of what they wanted to get out of it. The thing that united them was a shared dislike of something in the vague vicinity of capitalism, or government, or the man, not anything that could be used as the basis for any actual shared creations or efforts.
- ^
To be clear, I think it’s fine and good for people to congratulate each other on getting new jobs. It’s a big life change. But of course if your discourse platform approximately doesn’t allow anything else, as I expand on in the rest of this section, then you run into problems.
- ^
And… unfortunately… we are just getting started.
- ^
I am here trying to give a high-level gloss that tries to elucidate the central problems. Of course many individual conversations diverge, and there are variations on this, often in positive ways, but I would argue the overall tendency is strong and clear.
- ^
I switched to a different comment thread here, as a different thread made it easier to see the dynamics at hand. The Benquo thread also went meta, and you can read it here, but it seemed a bit harder to follow without reading a huge amount of additional context, and was a less clear example of the pattern I am trying to highlight.
- ^
See e.g. this comment thread with IMO a usually pretty good commenter who kept extremely strongly insisting that any analysis or evaluation of IMO clearly present subtext is fabricated or imagined.
- ^
I am not making a strong claim here that direct aggression is much worse or much better than passive aggression, I feel kind of confused about it, but I am saying that independently of that, there is one argument that passive/hidden aggression requires harsher punishment when prosecution does succeed.
- ^
Who, to be clear, have contributed to the site in the past and have a bunch of karma.
- ^
For some further details on what Said means by “responding” see this comment.
- ^
A bit unclear what exactly happened; you can read the thread yourself. Mostly we argued for a long time about what kind of place LessWrong should be and how authors should relate to criticism, until we gave an official warning, after which nothing about Said’s behavior changed, but we didn’t have the energy to prosecute the subtext another time.
- ^
The functionality for this had been present earlier, but we hadn’t really encouraged people to use it.
- ^
Unless we are talking about weird issues that involve doxxing or infohazards or things like that.
- ^
Requiring you to read only 15,000 words of summary. :P
- ^
The full quote being:
Buuuut what’s going on here is that—and this is imo unfortunate—the website you guys have built is such that posting or commenting on it provides me with a fairly low amount of value
This is something I really do find disappointing, but it is what it is (for now? things change, of course)
So again it’s not that I disagree with you about anything you’ve said
But the sort of care / attention / effort w.r.t. tone and wording and tact and so on, that you’re asking, raises the cost of participation for me above the benefit
(Another aspect of this is that if I have to NOT say what I actually think, even on e.g. the CFAR thing w.r.t. Double Crux, well, again, what then is the point)
(I can say things I don’t really believe anywhere)
[...]
If the takeaway here is that I have to learn things or change my behavior, well—I’m not averse in principle to doing that ever under any circumstances, but it has to be worth my while, if you see what I mean
Currently it is not
I hope to see that change, of course!
The specific moderation message we sent at the end of that exchange was:
The mod team has talked about it, and we’re going to insist you comment with the same level of tact you showed while talking with me. If that makes it not worth your while to comment on the new LW that’s regrettable and we hope someday the quality makes it worth your while to come back on these terms, but we understand and there are no hard feelings.
- ^
To be clear, as I point out in the earlier sections of this post, I think there are ways of doing this that would be good for the site, and functions that Said performed that are good, but I would be quite concerned about people doing a cargo-culting thing here.
- ^
And to be clear, this is all a pretty tricky topic. It is not rare for whole social groups, including the rationality community, to pretend to misunderstand something. As the moderators it’s part of our job to take into account whether the thing that is going on here is some social immune reaction that is exaggerating their misunderstandings, or maybe even genuinely preventing any real understanding from forming at all, and to adjust accordingly. This is hard.
- ^
Like, with some reasonable limit around 3000 words or so.
Thank you Habryka (and the rest of the mod team) for the effort and thoughtfulness you put into making LessWrong good.
I personally have had few problems with Said, but this seems like an extremely reasonable decision. I’m leaving this comment in part to help make you feel empowered to make similar decisions in the future when you think it necessary (and ideally, at a much lower cost of your time).
It might even be too reasonable… since there’s no real limit on what site administrators can do to their own site (they could replace all of LW with a giant poop emoji if they really wanted to), such enormously long elaborations might be counterproductive even for the intended purpose.
At least to me, a few paragraphs with flawless airtight logic is more genuinely convincing than dozens of paragraphs of less than airtight logic.
Speaking of which, I got the itch while writing this to add in an extra few sentences to elaborate in further detail… so there may be a subtle memetic effect too.
Edit: I seem to have attracted 4 random downvoters who appear too ashamed to even indicate a rationale. Which seems to indicate my comment touches upon something of substance.
This is not a strong argument. It is equally plausible that 4+ people think your post is simply bad and not worth the effort to criticize.
This is not meant as an attack on you, but I do think your post here is guilty of some of the same misbehaviour that the OP explains.
The quoted text wasn’t an argument, it doesn’t make sense to pretend it was…?
It’s clearly an edit to add in my own personal opinion that I wasn’t seeking an argument about.
And frankly, probably no one fully read all of habryka’s post, including you. So it wouldn’t make sense at all.
Edit: I just realized that does imply the downvoters are also being mildly deceptive, since they would know they didn’t read the full text. So ironically it reinforces the original point in a counterintuitive way, and if you squint at it, it might imply an argument on the meta level of multiple deceivers roaming around… but then pretty much everyone who commented or voted would fall under suspicion too, so that would be a real stretch.
Double Edit: It’s somewhat of a startling implication, could literally everyone who voted under this post be behaving mildly deceptively? I didn’t even consider the possibility when I wrote the original comment but now am leaning towards that being the case, if typical forum norms of reading the full text are taken literally. Thanks for raising the unsettling point. I’ll take a bit of karma loss for that.
May I ask what your motivation was when you wrote and published this post of yours?
Were you trying to learn something? Or were you trying to teach me something? Or were you just responding to the knee-jerk impulse to win a fight online?
My post above was an attempt to teach you something. I hope that this wording does not come off as condescending; it is not meant as such. I am here on LessWrong primarily to learn. As such, I appreciate it when someone genuinely tries to teach me something. I hope that you will take it in the same spirit.
I think your first post above had some flaws in terms of rationality. I think your follow-up is even less rational.
Am I making sense? I might not be. I can try to be clearer, but only if you truly want to know what I am trying to say.
The motivation, after the double edit, is clearly to express surprise after connecting the dots and to enumerate it…
I wrote it in the most straightforward and direct manner possible?
After re-reading it twice, I get that clearly implicates you too, so I get why you may be upset.
But even if it might have been better worded given more time… by definition all commentators under a post at least potentially voted. So I don’t see how the implication could have been avoided entirely while still getting the gist across.
Sure, but do you need to express all your emotions?
In my experience (as a rough guideline), when I do something, it is either because I want to achieve some goal, or because I am in the grip of some subconscious impulse. The latter is something I want to catch and notice as often as I can, in order to learn to be more conscious and more rational as much of the time as possible.
Since you read and post on LessWrong, I assume that you want to learn to be more rational. Am I right?
I may have been expressing myself too vaguely. What I have been trying to say is this: I think that when you write these posts, you are in the grip of subconscious urges—presumably an urge to defend yourself and “win fights” in order to secure your social status. I am trying to convince you that you can train and improve your own rationality by introspecting more about why you do the things you do.
Is this a question? Or are you just defending yourself again?
Not relevant to the larger discussion, but you wrote a sentence I disagree with:
“when I do something, it is either because I want to achieve some goal, or because I am in the grip of some subconscious impulse”
Well, in my model, I act either to achieve a goal or from some impulse. The impulse doesn’t have to be subconscious. I don’t think acting on impulses is always bad or opposed to rationality, in the same way that emotions are not always irrational.
I don’t see acting on an urge as itself bad. Moreover, there are circumstances when I try to act on my impulses more rather than less!
So… what is the problem with acting on impulses, exactly? I don’t need to express all my emotions, but all else equal, I consider doing that a good thing, because I want to. Things are not all else equal, but you need to say tactfully what the problem is. Writing comments because I want to (under some constraints) seems to me like the right thing to do.
(Alas, my longer explanations on this are in Hebrew: https://hadoveretharishona.wordpress.com/?p=7130)
Fair point. It is possible to be conscious of an impulse and act on it even if it does not serve any particular goal. Let us rephrase:
When I do something, it is either because I want to achieve some goal, or because I am in the grip of some impulse.
Why do you do this? In order to achieve some goal?
“Why do you do this? In order to achieve some goal?”
My best answer to this is “because I want to”, but I mostly think it’s the wrong question. You are assuming that people do things to achieve goals, and I’m saying that achieving goals is not the only reason to do things, and that “what goal is it achieving” is the wrong level of meta to ask.
Why do you think that the right sort of answers are in the form of goals and not in the form of impulses?
There is a pattern: when I want something, I experience an urge or desire to do something or have something. Then I act on that impulse/urge/desire. Then I satisfy it, and I feel sated or happy or fulfilled. This is good!
The way I model such things, this is an important part of what my non-existent Utility Function is. That is the first level. Sometimes, acting on an urge does not fulfill it, or even anti-fulfills it. Sometimes I want things that are actually too abstract or complicated to be described as urges I can act upon without planning and thinking. But this is the exception, not the rule.
As I see it, something like CEV works like that—do what I want, because I want it. Encounter problems, or things that need planning. Plan to solve the problems, or plan to achieve goals.
But the whole goals-and-planning part kicks in only as a reaction to problems, or to wanting something in the form of a result rather than an urge. Having an urge → acting on the urge → being satisfied is the basic loop, the default that does not need explanation or justification.
While it looks to me like you see having a goal and acting to achieve it as the basic loop that does not need explanation or justification.
So, to try again to answer the question “Why do you do this?”:
Because this is my utility function. The urges are, to a first approximation, my utility function. They are obviously not a function, but the way I will come to have some utility function, if humankind survives and I get to live long enough, is by weaving together the different things that I want. Part of it is goal-shaped, part of it is in the form of “I want the world to be in that state”, but a lot of it is in the form “I want to do X”.
There is an important difference, in my ontology, between wants in the form “I want the world to be in state X” (as in: I want the dishes to be washed, I want the food to be prepared, I want my home to be clean) and wants in the form “I want to do X”: I want to play the video game, to read the post, to read the book, to eat the tasty food, to listen to music, to go for a walk.
You can translate that into goal-framing by saying that my goal is the experience of walking or the pleasant sensation, but I think that translation loses something important, and that it’s the wrong framing.
What is CEV?
That is how I would explain it.
Can you please elaborate on what important thing you think is lost?
Coherent Extrapolated Volition
This looks to me like a failure in Noticing Frame Differences.
Imagine I try to explain some proof in geometry to my hypothetical friend who thinks in feelings. I’m trying to explain congruent triangles to her, and she replies—“so if there are three things that are the same, you feel like it’s the same triangle, but you need to have at least one side, because angles don’t feel real enough to you?”
And, like, this is not a wrong description, per se. She will be able to recognize congruent triangles. But I still notice that it doesn’t look like she understands the concept of proof at all!
I can describe the different predictions I can make when I say something is an urge or a goal—when it’s an urge I can’t fail to get it, I do the thing I want to do, and feel satisfied; while with a goal I want to change the world state, and I can try to achieve the goal, and fail, and be unsatisfied. But this can only explain why I think there are two clusters here, and it’s not what I’m trying to do.
But what I’m actually trying to do is to connect personal experience with words. Didn’t you ever feel the impulse to do something, then do it, and then it turned out the result was not what you wanted, and you were disappointed? Didn’t you ever have the impulse to do something, do it, and feel satisfied, even though the result wasn’t what you ostensibly wanted?
Those are the words that I use to describe these two experiences of mine.
This is not my experience. When I act on an urge, I do not necessarily feel satisfied. There is generally some pleasure associated with the act, but it can be extremely fleeting and short-lived.
This fleeting pleasure is better than nothing, and I will often act on an urge in order to get this feeling. But after the feeling has passed, I do not feel satisfied.
I only feel satisfied after I have accomplished something that feels valuable—a goal.
Interesting! I wonder to what extent we are different physiologically, to what extent we use different words to describe the same experiences, and to what extent our opinions on things shape our experiences. Alas, we don’t have a way to communicate our feelings directly, yet, and I honestly have no idea how to check.
Well, I am somewhat anhedoniac by nature. There are a lot of positive experiences which many (most?) people report and which I do not recognize. For example, the sunset does nothing for me. Sex has its moments but is overall disappointing and a far cry from its reputation. Live concerts are described by some as borderline religious experiences; for me they are cool and fun but nothing really exceptional.
Fortunately, my Buddhist-inspired meditation practice is helping me discover more joy in life.
This doesn’t make sense as a reply…
How is your opinion on perceived emotional expressiveness even relevant to the prior comment?
Let me ask you just one question: Do you truly want to learn to be more rational?
Please give me a direct answer to this.
This comment is utterly incomprehensible and full of baseless accusations. I will now downvote it. Am I behaving deceptively? How about if I had silently downvoted it? No.
How can your opinion even affect the probability of deception in the first place? It seems incapable of moving the needle in that way, so I don’t see the logical connection.
By definition, deception means that there might be some pretense/ulterior motives/deflection/tricks/etc… behind the face value reading of your comments.
I did actually read all of the post; it was an interesting read. The claim that “probably no one fully read all of habryka’s post” looks to me like an example of the Typical Mind Fallacy, and one that reflects poorly on you.
I also updated toward the possibility that I made the same mistake, and that I should stop assuming that 90%+ of the commenters read the post. Thank you for that.
I hereby voice strong approval of the meta-level approaches on display (being willing to do unpopular and awkward things to curate our walled garden, noticing that this particular decision is worth justifying in detail, spending several thousand words explaining everything out in the open, taking individual responsibility for making the call, and actively encouraging (!) anyone who leaves LW in protest or frustration to do so loudly), coupled with weak disapproval of the object-level action (all the complicating and extenuating factors still don’t make me comfortable with “we banned this person from the rationality forum for being annoyingly critical”).
If I were a moderator, I would have banned Jesus Christ Himself if He required me to spend one hundred hours moderating His posts on multiple occasions. Given your description here I am surprised you did not do this a long time ago. I admire your restraint, if not necessarily your wisdom.
I know what you mean, of course, but it is funny that you use Jesus as an example of someone unlikely to be banned when, historically, Jesus was in fact “banned”. :)
Fwiw I’ve found Said’s comments to be clear, crisp, and valuable. I don’t recall ever being annoyed by his comments, and found him a most useful bloodhound for bad epistemic practices and rhetoric. In many cases Said’s comment is the only good, clear, crisp critique of vagueposting and applause-lighting.
The examples in this post don’t seem compelling at all. One of the primary examples seems to be Duncan, who comes off [from a distance] as thin-skinned and obscurantist, emotionally blowing up at very fair criticism.
Despite my disagreement I endorse Habryka unilaterally taking these kinds of decisions and approve of his transparency and conduct in this matter.
Farewell, lesswrong gadfly. You will be missed.
This is my view too. I remember once trying (I think on Facebook) to gently talk him out of being really angry at someone for making what I thought was a reasonable criticism, and he ended up getting mad at me too.
I don’t think I link to a single Duncan/Said interaction in any of the core narratives of the post. I do link the moderation judgement of the previous Said/Duncan thread, but it’s not the bulk of this post.
Like, none of the comments linked in the post link to any threads between Said and Duncan.
And the moderation judgement in the Said/Duncan thread also didn’t really have much to do with Said’s conduct in that thread, but with his conduct on the site in general.
You might still not find the examples compelling, but there is basically no engagement with Duncan that played any kind of substantial role in any of this.
As another outside observer I also got the impression that the Duncan conflict was the most significant of the ones leading up to the ban, since he wrote a giant post advocating for banning Said, left the site in a huff shortly thereafter, and seems to be the main example of a top contributor by your lights who said they didn’t post due to Said.
Nah, you can see in the moderation history that we threatened Said with bans and moderation actions for many years before then. My honest best guess is that we would have banned Said somewhat earlier if not for the Duncan thread, though also that we wouldn’t have given him a rate-limit around that time, but it’s of course hard to tell.
My experience was that Said’s behavior in the Duncan thread was among the most understandable cases of him behaving badly (because I too have found myself drawn into conflicts with Duncan that end up quite aggressive and at least tempt me to behave badly). That’s part of why I don’t link to any comments of his in that thread above (I might somewhere in there, but if so it’s not intended as a particularly load-bearing part of the case).
I should comment publicly on this; I’ve talked with various people about it extensively in private. In case you just want my conclusion before my reasoning, I am sad but weakly supportive. An outline of six points, which I will maybe expand on if people ask questions:
I should link some previous writing of mine that wasn’t about Said:
When discussing the death of Socrates, I think it’s plausible that Socrates ‘had it coming’ because he was attacking the status-allocation methods of Athens, which were instrumental in keeping the city alive. That is, the ‘corrupting the youth’ charge might have been real and the sort of thing that it makes sense to watch out for.
In response to Zack’s post Lack of Social Grace is an Epistemic Virtue, I responded that the Royal Society didn’t think so. It seems somewhat telling to me that academic culture, the one I call “scholarly pedantic argumentative culture”, predates good science by a long time, and is clearly not sufficient to produce science. You need something more, and when I read about the Royal Society or the Republic of Letters I get a sense that they took worldly considerations seriously. They were trying to balance epistemic and instrumental rationality, in a way oft discussed on LW.
I don’t think I ever had a personal problem with Said or his comments; I generally found them easy to read and easy to respond to.
(See, for example, this recent exchange, where Said asked a clarifying question that I found easy to answer. The conversation continued from there—my comment was unfortunately a bit confusing—but I liked Gordon’s comment that ended the conversation.)
In particular, I was defensive of Said during the moderation discussion which involved him responding to my post (see this comment for me responding to habryka about how I read Said’s comment), and have generally been defensive of Said in other moderation discussions. I think it is important that LessWrong not lose sight of core rationalist virtues, and not fall into the LinkedIn attractor.
In particular I think we should keep as a live hypothesis “this person finds Said’s comments annoying because they are directing attention at the hole in their argument.” As someone who likes finding the hole in their argument and then developing the argument further, this never annoyed me. But this isn’t the only hypothesis and I think often Said or Zack acted as tho it was.
I am the person that caused Duncan to crystallize the concept of ‘emotionally tall’ discussed here (ctrl-f for it, you don’t have to read the rest of the post). In many ways this is good for a moderator—my skin is thick enough that I don’t have to worry about many interactions—but in some ways it is bad (in that behavior which is driving people away doesn’t bother me personally, and I need to be deliberately ‘looking out for’ users in a way that I don’t have to look out for myself). I think I used to view this as “a rationalist virtue I possess” and now I view it more as “an incidental fact about me”—like, my IQ is also relevant to my rationality, but it’s not really a “rationalist virtue” so much as the amount of horsepower that I’m working with.
I think cultural effects are important and Said’s case merited this much time and attention.
I really do think Said has positive effects and has put nontrivial effort into making rationality broadly available and am sad to see him go.
I also think Said has negative effects and am hopeful about seeing what happens on LessWrong without him.
I had hoped we would get to ground on some of the disagreements on cultural preferences and values, but what happened was mostly Said and Zack laid out their models, and Oli and the mod team laid out their models, and I don’t think we ever successfully identified cruxes for both sides. Like, I’m still not sure what Zack or Said think of the Royal Society example; Zack talks about it a bit in another comment on that page but not in a way that feels connected to the question of how to balance virtues against each other, and what virtues cultures should strive towards. (Said, in an email, strongly rejects my claim that there’s a difference between his culture of commenting and the Royal Society culture of commenting that I describe.)
I think early LessWrong was very focused on biases / the psychological literature on irrationality and formulating defenses against those things. I think in that framing, pointing out that something is impeding the flow of information is almost enough to end the conversation on its own. I think Said and Zack were pretty easily able to point to “and this is how your proposal blocks some information flow that is good.”
I think later LessWrong was more focused on holistic / integrated approaches. Asking “what would the Bayesian superintelligence do in this situation?” is a pretty different question from “am I running afoul of my checklist of biases?”, altho often it involves checking your checklist of biases. A master carpenter still uses their string.
In some ways, this actually reminds me of the EDT-CDT-FDT progression in decision theory. EDT considers all evidence, which causes it to make some dumb mistakes. CDT rules out a class of evidence, which avoids EDT’s dumb mistakes, but causes some subtle mistakes. FDT rules back in a narrower category of the evidence that CDT rules out, which avoids those subtle mistakes. But from CDT’s perspective, the evidence that FDT is ruling back in is illegitimate, and it’s a mistake to return to superstition.
I basically agree with Said’s view that this is a principles disagreement, and it’s jumping ahead of ourselves to simply declare that “our view is more sophisticated than yours; you don’t understand ours.”
Nevertheless I do believe that our view is more sophisticated, and I operationalize this by something like ITT-passing; I think it’s generally the case that I can see the thing Said or Zack is pointing out, and in the reverse direction I mostly get the sense that they rarely see the criticism, and if they do, it’s only as something that seems fundamentally illegitimate to them. Critic Contributions are Logically Irrelevant is a crisp example of this, I think; people often raise objections about commenters that don’t make sense as logical criticisms. But if they aren’t intended as logical criticisms, that seems irrelevant to me. (Perhaps it is worth rereading Feeling Rational.)
I think this took so long because the balance of positives and negatives was so close, and so we were ambivalent for a long time.
I suggested running an Athenian ostracism process or similar. I think it’s maybe worth public discussion of whether or not that would have been better?
This is like Alicorn’s ‘modularity’ proposal, but different. Whereas that one rested on ‘the mods are tired of you’, this one rested on ‘the populace is tired of you’ (or afraid of you, or so on). The Athenian citizens would vote on whether or not to have an ostracism, and then if they decided to have one, the citizens could write down a name. If enough people voted against someone, they would be exiled for 10 years.
The benefits I see from this are threefold:
shared reality on the question “but is Said sufficiently annoying to the bulk of LW citizens?”
asking “who is the worst commenter on LW?”. In several moderation discussions around Said, we’ve come up with various metrics to evaluate “most disliked user”, and identified some problem users that hadn’t risen to mod attention by being involved in large blowups.
Legitimacy of democracy. It’s one thing to say “we’ve received complaints” and another thing to say “yeah, most people don’t want you here”, and I think we can’t say the latter at present.
There are many drawbacks, however.
Elections necessarily involve lower context than expert decision-making. If we have deep and detailed models of moderation, and most users are running some simpler process, ostracisms will be settled by the simpler process instead of the deep models.
Despite lower context, elections generally involve higher cost! A thousand LWers considering the question of which user annoys them the most (net the value they provide) could pretty easily end up taking longer than the moderation discussions that we had, long and extensive as they were.
This also doesn’t involve corrective effort. We talk with problem users relatively early in the process, and sometimes the problem gets solved. This is instead a blunt instrument that knocks people out of the community, and engenders some unpleasant coalitional dynamics. (Would people put my name in, for suggesting that we be willing to exile people at all?)
People have trouble doing the accounting on diffuse responsibility. Will everyone that voted on the Said ban feel bad about LessWrong and their participation in it? How does that help the community being fun?
I generally buy habryka’s model of moderation here. I think there’s something about its applicability to Said that seems somewhat unclear to me.
My story of what happened with Said is that he’s not tracking some of the damage that he’s doing and he doesn’t think it’s his responsibility to track or mitigate that damage.
A lot of that damage started accumulating in people’s impressions of him, which would be activated when they first looked at a comment (like most sites, we put usernames before the content), and which would then cause them to take more damage on reading the comment than they would have if it were from someone else, in part because they would read it less charitably. “Is Said up to that destructive pattern again?”, they might ask themselves, in a way that makes them more likely to find it.
I think this also would show up in their interpretations of ambiguous evidence. Like, Benquo’s recent post on Said was cited as support for their position by both habryka (in the OP) and Said (here). My read is that both citations are correct because they’re focused on different narrow facets of Benquo’s post.
Unfortunately, once you accumulate enough of this damage it is very hard to restore a good state.
I think on one layer, it’s fair to describe this as “habryka is banning Said because he doesn’t like him.” But I think it’s more fair to describe this as “habryka doesn’t like Said because of <destructive pattern>, and is banning him for <destructive pattern>.”
I will finish with this comment from 12 years ago, in which I criticize Eliezer’s moderation practices. I was missing the concept of emotional tallness, then, and I think also missing the point about the conversation quality being worse because of indirect effects. I can see the younger me levelling a similar criticism at the mod team now.
This seems to be by far the most important crux; nothing else could’ve substantially changed attitudes on either side. Do environments widely recognized for excellence and intellectual progress generally have cultures of harsh and blunt criticism, and to what degree is its presence/absence a load-bearing part? This question also looks pretty important on its own, and the apparent lack of interest/attention is confusing.
To the best of my ability to detect, the answer is clearly and obviously “no” — there’s an important property of people not-bullshitting and not doing the LinkedIn thing, but you can actually do clear and honest and constructively critical communication without assholery (and it seems to me that the people who lump the two together have a skill issue and some sort of color-blindness; because they don’t know how to get the good parts of candor and criticism while not unduly hurting feelings, they assume that it can’t be done).
probably buried in noise, maybe write a question post about it?
Upvoted for this link, which I found valuable.
Criticism is a pretty thankless job. People mostly do it for the status reward, but consider if you detect some potentially fatal flaw in an average post (not written by someone very high status), but you’re not sure because maybe the author has a good explanation or defense, or you misunderstood something. What’s your motivation to spend a lot of effort to write up your arguments? If you’re right, both the post and your efforts to debunk it are quickly forgotten, but if you’re wrong, then the post remains standing/popular/upvoted and your embarrassing comment is left for everyone to see. Writing up a quick “clarifying” question makes more sense from a status/strategic perspective, but I rarely do even that nowadays because I have so little to gain from it, and a lot to lose including my time (including expected time to handle any back and forth) and personal relations with the author (if I didn’t word my comment carefully enough). (And this was before today’s decision, which of course disincentivizes such low-effort criticism even more.)
A few more quick thoughts as I’m not very motivated to get into a long discussion given the likely irreversible nature of the decision:
If you get rid of people like Said or otherwise discourage low-effort criticism, you’ll just get less criticism, not better criticism.
Low-effort and even “unproductive” criticism is an important signal, as it tells me at least one pair of eyes went over the post and this is the best they came up with (under low effort), and the author’s response also tells me something about their attitude towards potential flaws in their ideas. (Compare with posts that have 0 or near 0 comments, which isn’t uncommon even from relatively high profile authors like Will MacAskill.)
Turning into Sneer Club doesn’t seem like a realistic failure mode for LW. Places like that on the web seem to be deliberately molded that way by their founders/admins. Fully turning into LinkedIn also seems unlikely. For example, I think any posts by Eliezer will always attract plenty of criticisms due to the status rewards available if someone pointed out a real flaw.
I think in some sense both making top-level posts and criticism are thankless jobs. What is your motivation to spend a lot of effort to write up your arguments in top-level post form in the first place? I feel like all the things you list as making things unrewarding apply to top-level posts just as much as writing critical comments (especially in as much as you are writing on a topic, or on a forum, where people treat any reasoning error or mistake with grave disdain and threats of social punishment).
I don’t buy this. I am much more likely to want to comment on LessWrong (and other forums) if I don’t end up needing to deal with comment-sections that follow the patterns outlined in the OP, and I am generally someone who does lots of criticism and writes lots of critical comments. Many other commenters who I think write plenty of critique have reported similar.
Much of LessWrong has a pretty great reward-landscape for critique. I know that if I comment on a post by Steven Byrnes, or Buck, or Ryan Greenblatt or you, or Scott Alexander or many others, with pretty intense critique, that I will overall end up probably learning something, while also having a pretty good shot at correcting the public record on some important mistakes, and also ending up with real status and credibility within our extended ecosystem. Indeed, you personally come to mind as someone who I have come to respect largely as a result of writing good critiques and comments.
This is generally not the case with Said in my experience. It is very rare that I have a good time responding to any of his critiques, or reading the resulting comment threads. It burns an enormous amount of motivation, and by my best judgement of the situation the critiques do not end up particularly important or relevant when I try to evaluate the work many years later with more distance from the local discussion. They aren’t always wrong, but rarely have the structure of making my understanding actually much deeper.
By far the most likely way you end up with less critique is to make commenting on LessWrong feel generally unrewarding, drag everything into wars of attrition, and create an overall highly negative reward landscape for almost any kind of real or detailed contribution (whether top-level post or critique). If you want more critique, I think the goal is not to never punish critique, but to reward good critique. I am pretty happy with a bunch of things we’ve done for that over the years (like the annual review), and I would like to do more.
Top-level posts are not self-limiting (from a status perspective) in the way I described for a critical comment. If you come up with a great new idea, it can become a popular post read and reread by many over the years and you can become known for being its author. But if you come up with a great critical comment that debunks a post, the post will be downvoted and forgotten, and very few people will remember your role in debunking it.
I agree this is largely true for comments (largely by necessity of how comment visibility works)[1]. Indeed one thing I frequently encourage good commenters to do is to try to generalize their comments more and post them as top-level posts.
And as far as I can tell this is an enormously successful mechanism for getting highly-upvoted posts on LessWrong. Indeed, I would classify the current second most-upvoted post of all time on LessWrong as a post of this kind: https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer
Dialogues were also another attempt at making it so that critique is less self-limiting, by making it so that a conversation can happen at the same level as a post. I don’t think that plan succeeded amazingly well (largely because dialogues ended up hard to read, and hard to coordinate between authors), but it is a thing I care a lot about and expect to do more work on.
The popular comments section on the frontpage has also changed this situation a non-trivial amount. It is now the case that if you write a very good critique that causes a post to be downvoted, this will still result in your comment getting a lot of visibility on the frontpage. Indeed, just this very moment we have a critique by sunwillrise, with a bunch more karma than the post it is replying to, prominent on the frontpage:
Though I do think it’s been changing and we’ve made some improvements on this dimension, see my last paragraph
I disagree. Posts seem to have an outsized effect and will often be read a bunch before any solid criticisms appear. They then spread even given high-quality rebuttals… if those ever materialize.
I also think you’re referring to a group of people who typically write high-quality posts and handle criticism well, while others don’t handle criticism well. Despite my liking many of his posts, Duncan is an example of this.
As for Said specifically, I’ve been annoyed at reading his argumentation a few times, but then also find him saying something obvious and insightful that no one else pointed out anywhere in the comments. Losing that is unfortunate. I don’t think there’s enough “this seems wrong or questionable, why do you believe this?”
Said is definitely more rough than I’d like, but I also do think there’s a hole there that people are hesitant to fill.
So I do agree with Wei that you’ll just get less criticism, especially since I do feel like LessWrong has been growing implicitly less favorable towards quality critiques and more favorable towards vibey critiques. That is, another dangerous attractor is the Twitter/X attractor, wherein arguments do exist but they matter to the overall discourse less than whether or not someone puts out something that directionally ‘sounds good’. I think this is much more likely than the sneer attractor or the linkedin attractor.
I also think that while the frontpage comments section has been good for surfacing critique, it substantially encourages the “this sounds like the right vibe” reaction, as well as a mentality of reading the comments before the post, which encourages a faction mentality.
FWIW I feel like I get sufficient status reward for criticism and this moderation decision basically won’t affect my behavior
This defended a paper where I was lead author, which got 8 million views on Twitter and was possibly the most important research output by my current employer, against criticism that it was p-hacking
This got me a bounty of $700 or so (which I think I declined or forgot about?) and citation in a follow-up post
This ratioed the OP by 3:1 and induced a thoughtful response by OP that helped me learn some nontrivial stats facts
This got 73 karma and was the most important counterpoint to what I still think are mostly wrong and overrated views on nanotech
This got 70 karma and only took about an hour to write, and could have been 5 minutes if I were a better writer
Now it’s true that most of these comments are super long and high effort. But it’s possible to get status reward for lower effort comments too, e.g. this, though it feels more like springing a “gotcha”. Many of the examples of Said’s critiques in the post at least seemed either deliberately inflammatory or unhelpful or targeted at some procedural point that isn’t maximally relevant.
As for risking being wrong, this is the only “bad” recent comment of mine I can remember, and I think you have to be pretty risk averse to be totally discouraged from commenting. If 30% of my comments were wrong I would probably feel discouraged but if it were 15% I’d just be less confident or hedge more. Probably the main change I’ll make is to shift away from this uncommon and very marginal type of comment that imposes costs on the author and might be wrong, to just downvote and move on.
If you didn’t have the motivation to write your arguments, why did you waste your time reading the post? If you debunk the author’s post, they’re unlikely to forget it. If you debunk numerous posts, then you may acquire a reputation. If you debunk a popular post, then many people see the debunking. You’ve also spared yourself the labor of debunking future posts based on the initial flawed idea. The reward for delivering valid and empathetic criticism is cultivating a community of truth seekers in which you and others may be willing and able to participate. Do you lack that vision? Do you have that outlet elsewhere? Do you not care about developing community? Do you simply have better things to do and want to freeload on the community that others build?
You took the time to read the post, but you won’t write a “quick” clarifying question because you’re worried about wasting your time, and you think you have little to gain by understanding the content, so you’re depending on commenters like Said to do the job? If you have the time for just the initial question but not the back and forth, just write the first question and read the response. It takes little more time to put a brief friendly signal at the top of the comment than to leave it out. One may also practice writing in a non-contemptuous manner until it comes naturally, and learn to skim posts and read only those clearly likely to be worth responding to. It is possible to deliver low-effort criticism without being a flagrant asshole about it.
How do you know? Have you gathered data on this topic? Have you moderated a community? Have you observed the course of a substantial number of comparable moderation decisions in the past? What exactly is your model of the overall community reaction to such moderation decisions that leads you to this conclusion?
A signal of what? Important to whom? Are you really interested in what a low-effort troll would have to say in response to what you happen to write and post online?
If posts worth criticizing, due to their intellectual quality and the interest of the community, will receive their due criticism, then why can’t weak and uninteresting posts be ignored or engaged with by a charitable volunteer, as a teacher might respond to a student in order to develop their capabilities? Targeting weak and forgettable posts for unwarranted criticism increases their prominence in a quite mechanistic fashion: due to high-variance upvotes, the intrigue of seeing why a comment was strongly downvoted, the fact that the LessWrong homepage boosts new and highly upvoted comments, and the possibility that the author feels attacked and responds in an endless comment chain. There are selection effects on who stays in the community under these conditions. Solve for the equilibrium.
If you’re right, the author and those who read the comments gain a better understanding; if you’re wrong, you do. I think framing criticism as a status contest hurts your motivation to comment more than it helps, here.
I think these status motivations/dynamics are active whether or not you consciously think of them, because your subconscious is already constantly making status calculations. It’s possible consciously framing things this way makes it even worse, “hurts your motivation to comment” even more, but it seems unavoidable if we want to explicitly discuss these dynamics. (Sometimes I do deliberately avoid bringing up status in a discussion due to such effects, but here the OP already talked about status a bunch, and it seems like an unavoidable issue anyway.)
Making status calculations at all times is a choice you have the right to make, but in my opinion it’s a bad one.
It’s more useful to frame this in terms of particular norms, because different contexts activate different norms. It’s possible to deliberately cultivate or suppress specific norms in specific contexts (including those that take the form of status calculations, which is not all of them), shaping them in the long run rather than passively acknowledging their influence.
This is very indirect and so the feedback loops are terrible; it seems that usually you’d need to intervene on the background dynamics that would encourage/discourage the norms on their own (such as prevailing framings and terminology, making different actions or incentives more salient), not even intervening by encouraging/discouraging the norms directly.
I’m skeptical that it’s possible to use norms to suppress status calculations, and even more skeptical that it’s possible without huge cost/effort, beyond what typical LW members would be willing to pay. It’s hard for me to think of any groups or communities whose members have managed to suppress their status motivations/calculations. (It seems a lot more feasible/productive to exploit or redirect such motivations in various ways.) But if you have more to say about this, I’d be very curious to hear you out.
Not suppress status calculations, of course; my point is about the usefulness of being specific about particular norms that contribute to such status calculations (as well as norms that are not about status calculations). This should enable some agency in shaping incentives (by influencing specific norms according to their expected effects), rather than settling for cynically pointing out that status calculations are an immutable part of human nature, at least for most people. That is, the content of the status calculations is not immutable.
Probably you are thinking about a particular application of norm-shaping that wouldn’t work, while I was responding to what I perceived as a framing suggesting a general dismissal of norm-shaping as a useful thing to consider. This parenthetical sure seems to thicken the plot. (Maybe you are somehow intending the same point, in a way I’m not seeing, while also being skeptical of me making the same point, meaning that you are not seeing that I’m making the same point, possibly because it wouldn’t be a good response to your own intended point that I’m misunderstanding...)
Ok, I think we’re not disagreeing, I just misunderstood your comment. Thanks for clarifying.
Social motivations seem unavoidable, but I don’t see why those social motivations would unavoidably be in terms of a single-dimensional “global status” score. Some of my earliest posts on lesswrong are my attempt to guess at plausible mechanisms of social motivation, and I continue to not be convinced that this single-dimensional status view is obligatory, rather than merely socially self-reinforcing.
I think the standard 2-dimensional dominance/prestige model of social status (which can be simplified into just prestige here since dominance mostly doesn’t apply to LW) has a lot going for it, and balances well between complexity and realism/explanatory power. But I would be happy to consider a more complex and realistic model if the situation calls for it (i.e., the simpler model misses something important in the current situation). Can you explain more what you think it’s missing here, if anything? (I did skim your post but nothing jumped out at me as adding a lot of value here.)
I buy that prestige is a meaningful and common first PCA dimension in communities where it’s already common, which does seem likely to be most groups. I don’t mean to convey anything beyond ongoing irritation at people assuming the mental parts are fundamentally unable to be reconfigured for something less trapped than a type that collapses to a single global ordering. One basic change would be having a per-relationship personal rating of “your prestige with me”, or even “your prestige with me on a topic”. But also, I find it frustrating that a single status dimension is still common parlance when prestige/dominance is available. I’m not saying anything immediately relevant; I’m complaining that you said people are always making status calculations, and that that seems oversimplified and overconfident. Moreover, if you’re correct, I see it as a problem to be fixed.
I used “status” instead of “prestige/dominance” because it’s shorter and I think most people on LW already know the prestige/dominance model of status and will understand that I’m not referring to a scalar quantity by “status”. People use single words to refer to quantities that are more complex than scalars all the time. For example when I say “he’s really artistic” I obviously don’t mean to suggest that there’s just a single dimension of artistry.
To try to guess at why you made this complaint, maybe you’re thinking that a lot of people do have an over-simplified single-dimensional model of status, and by using “status” I’m feeding into or failing to help correct this mistake. If so, can you point to some clear evidence of such mistakes, i.e., beyond just people using the word “status”?
the latter seems right, I don’t have a handy link, but I’ll be on the lookout for concrete examples and come back to this, eta 2 weeks, / or * 2
I believe this is simply false: instead of criticism like “Your idea is stupid and wrong,” you will get criticism like “you have failed to elaborate on this detail of your brilliant and insightful idea,” which is markedly better.
This comment seems overly sarcastic/snarky, or if not, written in a way that seems weirdly ambiguous. I think it would be good for you to phrase it more straightforwardly (at present, any response would have to start with disentangling the ambiguity/irony/sarcasm, and also risk potential embarrassment as a result of misunderstanding).
I have not read this post yet (I assume it’s about more than just Said), but just to be clear: I personally trust you guys to ban people that are worth banning without writing thousands of words about it.
(Have read the post.) I disagree. I think overall habryka has gone through much greater pains than I think he should have to, but I don’t think this post is a part he should have skimped on. I would feel pretty negative about it if habryka had banned Said without an extensive explanation for why (modulo past discussions already kinda providing an explanation). I’d expect less transparency/effort for banning less important users.
I disagree—I trust them, but I still think the process is important. If you don’t want to read the words, you don’t have to, but I feel better that they’re there.
My experience of Said has been mostly as described: a strong sense of sneer on my and others’ posts that I find unpleasant.
I think there’s a large swathe of experience/understanding that Said doesn’t have, and which no amount of his Socratic questioning will ever actually create. The questioning isn’t designed for Said to try to understand, but to punish others for not making sense within Said’s worldview.
Thank you for this decision.
(My own writing, from here.)
I also have noticed in the past his sometimes unusually hostile/gaslighting/uncharitable/unproductive war-of-attrition discussion style when he disagrees with someone, described here in detail by habryka. Including his aggressive/escalating voting behavior in simple one-to-one disagreements, also mentioned by habryka. (I also wondered whether sock puppet accounts or specific “voting friends” are involved, but as far as I see habryka didn’t mention these exist, which is some evidence that they don’t.) I have not seen anyone else act like that, so I don’t think this is a case of just “banning people who voice criticism”. There are countless people posting outspoken criticisms without remotely employing an unconstructive style like that.
Reading now that this has been going on for many years, including temporary bans, I believe this is a psychological property of his personality, and likely not something he can really control. Similar to how some people have a natural tendency to voice disagreements in a friendly and productive manner, but the other way round.
As I have said before, on the object-level topic of Said Achmiz, I have written all I care about here, and I shall not pollute this thread further by digressing into that again. My thoughts on this topic are well-documented at those links, if anyone is interested.
It’s an understatement to say I think this is the wrong decision by the moderators. I disagree with it completely and I think it represents a critical step backwards for this site, not just in isolation but also more broadly because of what it illustrates about how moderators on this site view their powers and responsibilities and what proper norms of user behavior are. This isn’t the first time I have disagreed with moderators (in particular, Habryka) about matters I view as essential to this site’s continued epistemic success,[1] but it will be the last.
I have written words about why I view Said and Said-like contributions as critical. But words are wind, in this case. Perhaps actions speak louder. I will be deactivating my account[2] and permanently quitting this site, in protest of this decision.
It doesn’t make me happy to do so, as I’ve had some great interactions on here that have helped me learn and grow my understanding of a lot of important topics. And I… hope I’ve managed to give some back too, and that at least some users here have benefitted from reading my contributions. But sometimes nice things come to an end.
In so far as that’s still a primary goal, see here
For ease of navigation if anyone wants to view my profile in the future, I probably will not actually employ the “deactivate account” feature, but I will clearly note my departure there regardless
This seems like not a useful move. Your contributions, in my view, consistently avoid the thing that makes Said’s a problem. Your criticisms will be missed.
Seconded; I consistently find your comments both much more valuable and containing ~zero sneer. I would be dismayed by moderation actions towards you, while supporting those against Said. You might not have a sense of how his are different, but you automatically avoid the costly things he brings.
I think you shouldn’t leave, and Habryka shouldn’t have so prominently talked about leaving LW as something one should consider doing in response to this post. LW is the best place by far to discuss certain topics, and nowhere else provides comparable utility if one was interested in these topics. It’s technically true but misleading to say “There are many other places on the internet to read interesting ideas, to discuss with others, to participate in a community.” This underplays not only the immense value that LW provides to its members but also the value that a member could provide to LW and potentially to the world by influencing its discourse.
For your part, I think “quitting in protest” is unlikely to accomplish anything positive, and I’d much rather have your voice around than the (seemingly tiny) chance that your leaving causes Habryka to change his mind.
I definitely didn’t intend to communicate that it should be considered cheap to leave LessWrong (that’s why the next sentence says “I think LessWrong is worth a lot to a lot of people”).
I just meant to communicate that in terms of something like “basic needs” that a person might experience, LessWrong is very rarely a necessary component of getting those filled (which is an important threshold as there exist threats that people face that do threaten your basic needs more, and which hence make sense to be engaged with differently).
Then edit it
Sorry, I mean, my next sentence is literally saying “I think LessWrong is worth a lot to a lot of people”, which seems sufficient to pre-empt that misunderstanding.
I think that section as written is communicating the thing I want to communicate. Of course I could do a general editing pass to make it clearer, but I am not like, seeing anything particularly wrong with what I have written.
My first reaction is that this is bad decision theory.
It makes sense to actualize on strikes when the party it’s against would not otherwise be aware of or willing to act on the preferences of people whose product they’re utilizing. It can also make sense if you believe the other party is vulnerable to coercion and you want to extort them. If you do want fair trade and credibly believe the other party is knowing and willing, the meta strategy is to simply threaten your quorum, and never actually have to strike.
We don’t seem to be in the case where an early strike makes sense. The major reaction to this post is not of an unheard or silenced opposition, but various flavours of support. In order for the moderators to accede to your demand, they have to explicitly overrule a greater weight of other people’s preferences on the basis that those people will be less mean about it. But we’re on LessWrong; people here are not broadly open to coercion.
Additionally, we also don’t seem to be in a world where your preferences have been marginalized beyond the degree that they’re the minority preference. The moderators clearly paid a huge personal cost and accepted a huge time delay precisely because preferences of your kind are being weighed heavily.
Given the moderators are presumably not going to act on this, and would seemingly be wrong to do so, this comment reads as someone hurting themselves and others to make moderation incentives worse. Harming people to encourage bad outcomes is not something LessWrong should endorse.
I respect the integrity and strength of person needed to take a personal cost to defend someone against a harm, or a moral position. I think it’s honourable to credibly threaten to act in self-sacrificial ways. Yet, there are right and wrong ways to do this. This one strikes me as wrong.
I disagree about the decision theory. the move i see is “create incentive to not do thing i consider bad” and it’s just… fine. extorting people doesn’t make sense, timelessly; people should ignore such threats. but acting in a way that incentivizes good things and disincentivizes bad things is just good.
it sounds like you refer to threats that exist only if the other side concedes to them as the way to go, but this is not something that ideal agents would do.
and yet. it’s decision theory you start your comment with. generally, I’m just confused, and can’t even locate this confusion.
Let me join the chorus: please do not leave in protest; your comments here do some of the same positive things that Said’s comments do, and your leaving would have a bunch of the negative consequences of Said’s banning without the positive ones (because, at least so it seems to me, you are much less annoying than Said).
(For the avoidance of doubt, I find you a net-positive commenter here for reasons other than that you do some of the useful things Said has done, but that particular aspect seems the most relevant on this occasion.)
I join the chorus of people saying they are sad to see you go.
Will you be writing elsewhere? I’ve benefited a lot from some of your comments, and would be bummed to see you leave.
I will be sad to see you go.
Before you quit, maybe we can create a wiki page of people who left, with contact information, to open the door for a refugee forum at some point in the future?
While I did not wish for you to leave, I am strangely satisfied that you have left, as my recollection is that you have threatened it before, and I would feel gaslit if those had been empty threats.
As I recall, after the last time you were involved in a thread about Said you deactivated your account, and then eventually came back. My colleague has pointed out to me that, according to the database, you have activated the ‘deactivate my account’ feature on 12 occasions, each time coming back. I hope for your own dignity that you indeed do leave and do not backtrack on this for at least 2 years.
On the contrary, I’m hopeful sunwillrise sees the reaction to their leaving and updates on that. I think your comment here is unreasonable and petty.
(I too hope that, but also think it is kind of important to understand that sunwillrise has deactivated their account really a lot of times before, more than any other user I can think of, and has said they were leaving before, IIRC. I do think they are a good commenter and would be sad to see them leave.)
I am surprised that user data is analyzed that way, and then also that it is published here when someone has left or declared intention to do so.
(Whether someone deactivates their account is public info, you could just go through the internet archive of any page where sunwillrise commented and count how many times their username display changed)
I do not think that such a theoretically possible effort is comparable to site moderators summarizing and publishing the information in an argument.
(I don’t understand this comment. It would be like 10 minutes of effort to figure this out, so maybe there is some misunderstanding about how one would go about this. Also in-general, if anyone wants any kind of information that can be figured out from public information like this, feel free to ping the admins and we will tell you)
I think people don’t usually even try to figure something like that out, or are even aware of the option. So if you publicly announce that a user has deactivated their account X times, then this is information that almost no one would otherwise ever receive.
I also have the sense that it’s better to not do that, even though I have a hard time explaining in words why that is.
Please just ask us if you want publicly available but annoying to get information about LW posts! (for example, if you want a past revision of a post that was public at some point)
I’ve answered requests like that many times over the years and will continue to do that (of course barring some exceptional circumstances like doxxing or people accidentally leaking actually sensitive private data)
I read the whole post and appreciated the detail and the decision. I have had discussions with Said that were valuable, and I am sad to see that he didn’t change what I consider to be a bad pattern in order to continue the version of it that’s good. I’ve mostly just been impressed with sunwillrise’s version of it lately, for example. I also try to do a version of this occasionally, and it’s not clear to me my contributions are uniformly good. Input welcome. But I sometimes go through and try to find posts with no comments, see if I have anything to say about them, and try to both try to describe something I found positive and ask about something that confused me. Hopefully that’s been helpful.
Many years ago I lurked on LessWrong, making a very occasional comment but finding the ideas and discussion fascinating and appealing. I believe I am not as smart as the average commenter here, and I am certainly less formally educated. I eventually drifted away to follow other interests and did not put in the work to learn enough to feel like I could contribute meaningfully. I specifically recall Said Achmiz as being a commenter I was afraid of and did not want to engage with. I didn’t leave entirely because of Said, it was more about the effort of learning all the concepts, but maybe 1⁄8 of my decision was based on him. I imagine his attitude towards this will be: if I’m too much of a coward to risk an unknown internet commenter saying possibly bad things about my own comments, then I really don’t belong here anyway. Which, maybe it’s true. I don’t know if I will try again in the upcoming 3 years, but I’m more likely to than before Said was banned.
Context: I much more recently gravitated to the Duncansphere, as it were, and am kinda on the fringes of that these days (I missed the Duncan/Said thing, and only know about it from comments on this post). I was encouraged there to come here and post this anecdote.
Thank you for the information! It seems good to get accounts like this from actual literal people. It also seems a little bad that someone is encouraging people who haven’t interacted with Said [edited: on LessWrong.com] to come comment on his ban post. That seems like it could lead to bad dynamics.
(It was me, and in the place where I encouraged DrShiny to come here and repeat what they’d already said unprompted, I also offered $5 to anybody who disagreed with the Said ban to please come and leave that comment as well.)
Appreciated and information received
Well, this is someone who hasn’t interacted with Said in the sense of exchanging words. They have interacted with him in the sense that Said’s comments have marginally changed the trajectory of their life. (So maybe we say they haven’t interacted with Said but Said has interacted with them? But that seems like the more important direction here.)
Like, some rando who never heard of LW or Said Achmiz chiming in to say “I would have found Said unpleasant if I’d been here” would feel a bit weird to me. Not off topic but also not very meaningful, and I’d be worried about selection effects. (Which I take to be the bad dynamics you’re thinking of. “We don’t get an unbiased sample of randos, so it’s hard to tell what randos-in-aggregate think.”)
But here… sure, there’s still some chance of selection effects, and it’s good to keep them in mind, which Duncan did.[1]
But there’s also selection effects that come from “people somewhat driven away by Said are less likely to be here than people nonewhat driven away by him”, and encouraging DrShiny to comment is a way of counteracting those.
So like, I think it’s good to notice the thing that you noticed, we should indeed be paying attention to such things, but ultimately I don’t think it was bad.
Granted, his attempt to fight against them presumably wasn’t 100% successful in expectation. Duncan’s discord members are probably somewhat selected in the direction of disliking Said, though I think less than a lot of people would guess.
Another thing that seems relevant: I claim the members are also somewhat selected for “people who would be a good fit for LW if they feel like being here”, and I haven’t spoken to DrShiny much but from what I have I believe they are such a person.
I meant to write “on LessWrong” and screwed up, aaargh! Thank you for noticing
Edit: That doesn’t answer your comment directly. Yeah, I’m still not super comfortable with the brigading-y dynamics, but am okay with them existing in a called-out form.
I am disappointed and dismayed.
This post contains what feels to me like an awful lot of psychoanalysis of the LW readership, assertions like “it is clear to most authors and readers”, and a second-person narrative about what it is like to post here:
And like, man, is that true? Did you conduct a poll? I didn’t get a survey. You pay some attention to Zack’s perspective on Said, maybe because it’d be kind of laughable to pretend you hadn’t heard about it; but I’m one of the less-strident people Zack commiserates with about Said’s travails, and you had access to my opinion on the matter if you were willing to listen to a wheel that only squeaked a little bit. My comment is toplevel and has lots of votes and netted positive on both karma and agreement and most of the nested remarks are about whether it was polite of me to compare a non-Said person to a weird bug.
This post spends so much time talking about the complaints you’ve gotten, the experiences you imagine complainants having, the clear communication you envision occurring between “most” of a population and your target here. I believe that you’ve received complaints. I understand why you might not choose to publish their privately-submitted text. It does leave me with not that much to go on besides public comment wars, and it seems like I don’t interpret them the same way you do.
Am I going to loudly quit the site? Well, uh. Despite your protestations to the contrary I don’t actually think you will care if I do or not. I don’t write here that much any more. There was a time when I read every post on LessWrong and treated writing stuff here like it was my job, but no longer. I made a bug report the other day and I just checked and the bug still reproduces. You had access to my opinion on Said and the “you” in this post is clearly not about me. I don’t think it will really affect you if I stay or go. But if I do absentmindedly navigate here out of sixteen year habit I’m going to have a bad taste in my mouth about it.
I am sorry you didn’t like the post! I do think if you were still more active here, I would have probably reached out in some form (I am aware of that one comment you left a while ago, and disagreed with it).
I generally respect you and wish you participated more on the site and also do think of you as someone whose opinion I would be interested in on this and other topics.
I think the narrative above pretty accurately describes the experiences of a bunch of authors. I only ran it by like 2-3 non-LW team members since this post already took an enormous amount of time to write. I am of course not intending to capture some kind of universal experience on LessWrong, and of course definitely wouldn’t be aiming for that section to represent your experience on LessWrong, since I don’t think you ever had any of the relevant interactions with Said, at least since I’ve been running LW.
I do think I stand behind the sentence that precedes this “it is clear to most authors and readers”.[1] I don’t think beyond that I am doing that much psychoanalysis of the LW readership, or if I do, I don’t think I am doing so in a particularly weird or bad way? It’s a key part of my job as a moderator to understand what is going on in the brains of the LW commenters, so of course I will have lots of models of that.
Overall… I would like to respond more, and genuinely care about what you think on this topic, but also I am not really sure what your actual complaint is. I understand you like Said as a contributor on the site, though I really don’t have much detail on your opinion. I understand you dislike something about the explanation of my background models for this decision, maybe something to do with how I speak with too much authority or bias about reader and author-experience on the site, but I don’t think I have enough context to respond.
If you don’t feel like elaborating I might take a half hour or so and make best guesses at what you meant, and then respond to that, but I would appreciate more clarification, and am genuinely curious about your models here.
For reader convenience, here is the relevant paragraph:
You disagreed with it? What about it? I just read it over again and it doesn’t make a lot of claims I could imagine you disagreeing with—for one thing I talk about my personal experience of Said, not his objective properties.
Okay, but… why. Why do you think that. Is there a reason you think that, which other people could inspect your reasoning on, which is more viewable than unenumerated “complaints”? Again, I believe the complaints exist. How many, order of magnitude? Were they all from unique complainants? I believe you that you have spent many many hours on this matter: what did you spend them doing?
Like I said in the comment linked in the grandparent, Said is on a very short list of people who I have a persistent impression of at all—without my ever having met him in person or talked to him on another website, even—and has left a consistently positive impression. Maybe I missed all his egregious comments, but I clicked some of the handpicked examples in your post and they just don’t seem that bad? Perhaps he is equipped with a whistle that emits vibes at a frequency I cannot hear. I can’t rule that out. It just has not been demonstrated to me by any metric other than you not liking moderating him.
To be clear: I think it would in fact be valid to run your moderation decisions on you not liking moderating him. If you have a mod team, and everybody on the mod team is like “I am sick and tired of being called in to deal with Said-related things”, and you announce, “we don’t seem to employ anyone who is willing to deal with Said-related things and we don’t have the budget to hire a new moderator who’ll have that as an explicit part of their job description, so bye Said”, that would be in my view perfectly licit. If you want to appeal solely to it being time-consuming and unpleasant to moderate a guy, you can, and I wouldn’t object much to that no matter how much I like the guy, I’d just be puzzled about how polarizing he manages to be. That’s kinda how one of my favorite Discord servers runs—if everybody on the mod team one by one becomes weary of the prospect of yet another local norms conversation with some server member, bye server member.
Instead you are trying to appeal to some less subjective principles, so—why do you think these principles obtain, and why do you believe they’ve been violated here? Just-so stories about what you imagine it might be like for one of your anonymous complainants to write on LW till they sadly plod away, ears ringing with the mysterious vibe-whistle, do not answer these questions for me. You don’t have to answer the questions but you’ve chosen to stake out a position that prompts them.
I mean, I really tried to explain a lot of my models for what I think the underlying generators of this are. That’s why the post is 15,000 words long.
To be clear, LessWrong is not a democracy, and while I think the complaints are important, I don’t consider them to be the central part of this post. I tried to explain in more mechanistic terms what I think is going wrong in conversations with Said, and those mechanistic terms are where my cruxes for this decision are located. If I changed my mind on those, I would make different decisions. If all the complaints disappeared, but I still had the same opinions on the underlying mechanics, then I would still make the same decision.
I link to something like 5-15 comment threads in the post above. Many of the complaints are on those comment threads and so are public. See for example the Benquo ones that I have quoted in the post. I also link to the most recent thread with Gordon, and you of course have seen the thread with Duncan and Said.
Beyond that, I’ve received many complaints (probably on the order of a hundred) of the “LessWrong gives me Sneer Club vibe” nature[1], mostly from people who do not post here regularly but I wish they would (for a random datapoint on this, Nate Soares recently complained about this to me). Most of those do not mention Said by name, though sometimes when I try to dig deeper into how they formed their impression I find them pointing to some comment thread in which Said was active. There are also some other commenters who have historically been pointed to a lot.
And then beyond that, my guess is there are another 10-ish private complaints I’ve gotten from active commenters or posters on LW about Said, usually in spoken conversation, sometimes in DMs or off-hand comments in other threads.
The thing I can say most confidently is that almost no one else even remotely comes close to these numbers. I approximately never receive complaints about anyone specific on LW. I can think of maybe one commenter where the number of datapoints similar to any of the above would reach above 2-3. Said is an enormous outlier: at most one or two other commenters have received anywhere close to a similar number of complaints about their behavior on LW 2.0, and they are no longer active, so the question of moderating them seems kind of moot.
I think some of the outlier-ish nature of this can be attributed to there having been lots of high-profile moderation threads involving Said and so people feeling more inclined to express their opinions on this matter, and I find it hard to fully adjust for that, but it really seems very unambiguous to me that Said is the most complained-about user presently active on LW.
Most of my time on this issue was spent writing the ~100,000 words I’ve written on the topics that tend to come up in the linked threads about LessWrong culture, and moderation principles. Second to that, my time on this broader issue was spent talking in-person or in DMs to active contributors to LW, or people who I would like to become active contributors about the relevant dynamics.
I mean, again, I did really try to explain a lot of the underlying models and explain things in mechanistic terms. I think it’s totally fine for you to disagree with my arguments in the OP, or to disagree with the authors who have made public complaints about Said, or for you to either discount, or disbelieve my account of private complaints to me, but like, I did try to explain.
The post above really isn’t mostly about the cost of moderating Said, though of course many of my personal interactions with Said are in the context of conversations about moderation. My complaints are centrally about the conduct he had in those conversations, which helped me form the models I try to explain in the post above, not about the cost of moderation itself (which I think would be weirdly circular, since I could just choose to not moderate him, which would make the supposed cost disappear, so at the very least there is a burden that needs to be established that Said frequently requires some moderator action to be taken, which I do think the complaints are helpful for establishing).
And like, to be realistic, I think the post, despite its 15,000-word length, still doesn’t really remotely capture the complexity of the social dynamics that are present in moderation calls like this. There are lots of additional models that I have that I could bring up here, and elaborate more on. It seems to me you understand that at the end of the day it’s not super realistic to make these moderation decisions in a way that a moderator could hope to end up with a clear and close-to-universally compelling explanation for all of their decisions. But I nevertheless think it’s good to try, especially in as much as I am trying to derive case-law from this case, which I do think is a good instinct to have.
Maybe that’s where we disagree and you think I am doing something that is actively bad by trying to elaborate on my models here, instead of just owning up to this being something closer to a personal preference which other people should accommodate and keep track of as a cost of dealing with me. I think at least on the meta level I agree with you that in as much as my reasoning in this post was full of holes and weak arguments, it would be better to instead make a post that is shorter and doesn’t aim for more universally compelling arguments, and just puts the cost “on my tab” so to speak. I of course do think the arguments in the post should be compelling to many, and furthermore think it’s good even for fully corrupt moderators/clever-arguers to make their reasoning explicit, for it at least puts some pressure on making my calls here consistent with future calls, which I think helps rein in some risk of abuse of power.
I do think some miscommunication must be happening here. Your OP comment to me reads as clearly upset and implying that I’ve done something worthy of harsh social judgement. It doesn’t sound to me like you are actually saying that I don’t have to answer these questions. Or maybe my read expressed in my previous paragraph is right and you think the bad thing that happened is me making bad arguments for banning Said, which is much worse than no arguments for banning Said, by your lights. Let me know if so; if not, I don’t think your top-level comment feels compatible with your assertion that I don’t actually have a burden of proof here (though of course possibly you mean a third thing, and I am not trying to express enormous confidence on that not being the case).
Not generally using the word “sneer club” but the whole general category of complaints of “my experience of posting on LW is that someone will tear very aggressively into some part of my post which I didn’t consider particularly important, and then I will waste a ton of time in pretty aggro internet discussion”.
You spend a lot of words trying very hard to explain a thing that is not the same thing that I wanted to know. Perhaps lots of other people wanted to know it? I can only speak for myself.
Yeah, I know. I provided an example of a way you could have chosen to openly run it as an oligarchy (“anyone the entire mod team is sick of for any reason is banned because we have no one to moderate them”) and that I would have respected. Let us call this proposed oligarchical model something fun like “Modularity”—once no mod(ule)s are compatible with somebody, that somebody can’t be on the site. You are doing a thing other than that with more moving parts.
These moving parts in particular. If you were implementing Modularity, it wouldn’t matter what was going wrong. Your bare word is more than sufficient to convince me that something is going wrong, and “something is going wrong” would be enough for you to refuse responsibility for further dealing with Said where Things Go Wrong All The Time for whatever reason. No troubleshooting burden would exist, no explanatory burden would be called for. You could just not like his face, and get the same result. But you’re not doing Modularity. You’re doing something where you write thousands of words about why you think his face is objectively bad.
Thank you for the numbers!
I know! I can tell you tried. You did not communicate all the things I wanted to know, though you have here ameliorated that somewhat; I am agnostic about whether this is because you were originally trying to communicate some other thing (perhaps to some other audience) or because communication is just hard and even trying to communicate the correct thing sometimes does not work.
No, not particularly. You’re not implementing Modularity, you’re doing this complicated model-backed thing and you want to explain your complicated model-backed thing. I think that if you were doing Modularity that would be a) respectable and b) require very little digital ink-spilling, but you are not in fact doing either the policy I made up or any policy closely related to it, so you might as well explain what you are doing.
Yeah, you reading additional content into text as though it were clear is a theme here. I said in words that I was disappointed and dismayed. If you are ascribing more emotions to me than those ones, kindly cut it out. I wouldn’t have bothered commenting at all if you hadn’t expressly announced you wanted to hear from the peanut gallery on this one, I’m not making any claims about what social judgment you deserve.
You literally do not have to; the power of Alicorn might completely fail to compel you and then the consequences of this would be nil. Is there some passphrase which communicates a question and also acknowledges that nothing bad happens to you if you don’t answer them or are we operating under a guess culture so extreme that this is impossible? I mean, if you ignored me I might be sorta irked. Maybe I would make sarcastic remarks about it with my pals. I’m not gonna try to get you fired or anything.
No; given that you are doing an argument-driven decisionmaking process, providing the arguments is the right call. I just brought up a process that would be respectable yet not argument-driven.
I’m not sure what you mean by this but perhaps some other thing I said will happen to clarify something usefully.
Cool, I think this clarified a bunch. Summarizing roughly where I think you are at:
In moderation space, there is one way to run things that feels pretty straightforward to you, which you here for convenience called “modularity”, where you treat moderation as a pragmatic thing for which “I don’t have the resources to deal with this kind of person” without much explanation or elaboration is par for the course. You are both confused, and at least somewhat concerned about what I am trying to do in the OP, which is clearly not that thing.
There are at least two dimensions on which you feel concerned/confused about what is going on:
What is the thing I am trying to do here? Am I trying to make some universally compelling argument for badness? Am I trying to rally up a mob to hate on Said so that I can maintain legitimacy? Am I trying to do some complicated legal system thing with precedent and laws?
What do I actually think is going wrong in conversations with Said? Like, where are the supposed terrible comments of his, or things that feel like they are supposed to be compelling to someone like you by whatever standard my system of moderation is trying to achieve? There are a lot of words in the OP, but they feel like they aren’t really addressing this (in part because it’s not super clear to you what standard the explanations are aiming to establish).
Probably I got various pieces of affect wrong, but this is currently my best guess of where you are at in this thread.
I think this is a reasonable perspective, so I’ll respond to it, assuming that it’s reasonably accurate, though let me know if I got something important wrong.
What is the thing I am trying to do here?
I agree with you that something like the “modularity” approach is a reasonable baseline. And indeed, I do think a huge fraction of this announcement should be seen as “look, it’s been too long, I’ve spent a lot of energy on this, I am not dealing with this anymore, let’s part ways”, and I would, in many circumstances consider that a sufficient thing for a moderator of a forum like LessWrong to say.
But I think there are a few reasons why I am not satisfied with that in this circumstance:
LessWrong is less straightforwardly my personal fiefdom to run than most online forums. I did not found this place, Eliezer did. I have inherited what I consider a really important cultural institution, and in that context, I feel a responsibility to justify what I do with it with something more like good universal principles. Realistically, I will need me and my team, as moderators, to have some freedom to occasionally just say “look, I can’t deal with this person”, but IMO at a larger scale, I think the right to run LessWrong should be earned.
Beyond that, the thing I am trying to do with LessWrong is aiming bigger, and aiming to do something somewhat more robust than to “just” build a functional community. The Rationality community is one of the biggest online communities in the world, and has been surprisingly impactful on the trajectory of civilization[1]. I think something closer to courts and debates on the nature of justice and principles and checks and balances is appropriate for something that I want society to be able to put weight on (for example, LessWrong is by far the most active discussion forum for AI Alignment, and within those confines occupies an important social role).
And then there is also a third thing, which makes me want to be particularly detailed and concrete and give lots of arguments and models here, and which is more relevant for Said in particular. And that thing is the feeling of being insane: of really feeling like something hurtful and harmful is happening, while tons of people around you are denying that any such thing is going on. That feeling is at the core of a lot of the complaints I have received over the years about Said, and is also at the core of my own bad experiences with Said. More specifically, the thing where I read a top-level comment that I cannot help but read in a sneering voice, dripping with judgement, pointing a finger at me or the author in a way that summons judgement and punishment, but which as soon as it’s called out disappears, denies it ever existed, or keeps slipping away, redefining itself in endless circles.
And I don’t know, maybe any attempt to disentangle the things that are going on and to try to make some kind of compelling demonstration that I and others are not insane is doomed. But I currently think it’s a valuable service, and something I owe to a lot of the people who had bad experiences on LessWrong over the years with Said, and something for which I myself would benefit from recognition and understanding. I also think it’s the cause of the death of many many institutions in the world at large, and while I think LessWrong could survive for a while on the moderators just appealing to “look, I just can’t deal with this guy”, my best guess is we would lose some social legitimacy each time, and this would substantially limit what LessWrong can achieve over the long run.
What do I actually think is going wrong in conversations with Said?
So now maybe let’s get a bit more into the object-level. I do think I have pointed to the biggest component of what I think is going wrong in the previous two paragraphs.
My model of you, based both on your last reply and on other things I know about you, is currently not very compelled, and probably thinks I am chasing shadows or something like that. That no, Said is indeed really actually not intending to sneer at people all the time, is not intending to summon up lots of social judgement, does not carry enmity in his heart, and is just writing comments, usually with disagreement, sometimes with approval. Maybe he feels some things while writing, sometimes those feelings are negative, but it’s not that there is some overarching complicated optimization process that tries to get most authors on LessWrong to leave and stop doing whatever they are doing. They are just comments, trying to point out flaws and correct the record. For reasons of local validity.
But man, I do just think that’s false, after almost a decade of thinking about it on and off, being in dozens of comment threads with Said, and spending many hundreds of hours on it. Above, I link to many of the relevant posts; what I think is going on is something more like “Said is trying to unilaterally enforce social norms of his own choosing, while denying that any such thing is happening, by making lots of comments that imply the people he doesn’t like are idiots, or are making elementary mistakes, or are being deceptive, or lack common sense, but that shift and twist when pushed on”.
I write about the pieces of this a lot in the post above. It IMO comes through quite a lot in comments like this:
or this:
Or:
Like, I think you can see the disdain in these comments. The disdain for almost everyone on LessWrong. Definitely disdain for me, and many others. And I think I could handle disdain fine if it were carried openly, and could be argued with. But I don’t know how to argue with it. I don’t know how to take it as an object. Whenever I try to point at it, I get comments like this:[2]
And I at least don’t know how to deal with it. I find the experience of having this kind of dynamic present on LessWrong extremely costly. I know many authors have felt similarly. Maybe you wouldn’t find it costly. I currently don’t believe that you wouldn’t, and instead believe that you too would choose to ban Said from forums that you ran, after you ran into it a few times yourself, probably much faster than I did.
But maybe I am wrong. While I feel some temptation to try to restore some of my sanity by providing compelling demonstrations of what has felt like gaslighting to me, it is not the primary thing I am hoping to do with this post. The key thing is to have any clear moderation announcement at all, and to make it easy for other people to form their own judgement of how good of a job we are doing with moderation, and to explain some of the principles that will guide future moderation decisions, for the reasons I listed in the second part of this comment.
IDK, that’s a lot more words. Maybe they help communicate something. Maybe they again fail. In general, thank you for many of your contributions to LessWrong and the community over the years.
With unclear sign, unfortunately, but like, we IMO are having a large effect on how the whole AGI thing is going, in some way
Please forgive me for not linking to all of these. I don’t have all the links handy, and searching for the quoted text should make it relatively fast to look up the underlying comment, if someone wants to.
I think your model of me as represented in this comment is pretty good and not worth further refining in detail.
I read something into those comments—I might even possibly call it “disdain”, but—“disdain (neutral)”, not “disdain (derogatory)”. It just… doesn’t bother me, that he writes in a way that communicates that feeling. It certainly bothers me less than when (for example) Eliezer Yudkowsky communicates disdain, purely as a stylistic matter. If I thought Said would want to be on my Discord server I would invite him and expect this to be fine. (Eliezer is on my Discord server, which is also usually fine.)
It bothers you. I’m not trying to argue you out of being bothered. I’m not trying to argue the complainants out of being bothered. It bothering you would, under the Modularity regime, be sufficient.
But you’re not doing that. You’re trying to make the case that you are objectively right to feel that way, that you have succeeded at a Sense Motive check to detect a pattern of emotions and intentions that are really there. I don’t agree with you, about that.
But I don’t have to. I don’t have your job. (I wouldn’t want it.)
I think the claim I’d make is not necessarily that Oli’s Sense Motive check has succeeded, but that Oli’s Sense Motive check correlates much better with other people’s Sense Motive checks than yours does, and that ultimately that’s what ends up mattering for the effects on discourse.
Like, in the sense that someone’s motives approximately only affect LessWrong by affecting the words that they write. So when we know the words they write, knowing their motives doesn’t give us any more information about how they’re going to affect LessWrong. For some people, there’s something like… “okay, if this person actually felt disdain then the words they write in future are likely to be _, and if not they’re likely to be _ instead; and we can probably even shift the distribution if we ask them hey we detect disdain from your comment, is that intended?”. But we don’t really have that uncertainty with Said. We know how he’s going to write, whether he feels disdain or not.
I am somewhat interested in his True Motives, but I don’t think they should be relevant to LW moderation.
(This is not intended to say “Said’s comments are just fine except that people detect disdain”.)
Makes sense. I think I probably could, with many more hours of examples and walking you through things, convince you of that. Maybe that’s worth it. Or maybe I’ll be a better writer in a few years and can get it across more easily. (Of course, you disagree, for if you did agree with that, you would probably agree with me now, conservation of expected evidence and all that)
Not planning to give it another try for now, though if you want me to try, I would do it. It just doesn’t seem, on the margin, like the best use of time for either of us.
I hate to be indelicate, but are you insane? It’s a goddamn web forum, not the ICC. The mods got complaints about a user’s behavior and they banned him. They can’t run a focus group to see how everybody feels about the situation first.
I mean, to be clear, I did have like 20+ hours of conversation with many authors and contributors who had very strong feelings on this topic just as part of writing this post[1], with many different disagreeing viewpoints, so I think we did a lot more than “run a focus group”.
Not to mention the many more conversations I’ve had over the last decade about this.
Because Said is an important user who has provided criticism/commentary across many years. This is not about some random new user, which is why there is a long post in the first place, rather than him being silently banned.
Alicorn is raising a legitimate point: that it is easy to get complaints about a user who is critical of others, that we don’t have much information about the magnitude, and that it is far harder to get information about users who think his posts are useful.
LessWrong isn’t a democracy, but these are legitimate questions to ask because they are about what kind of culture (as Habryka talks about) LW is trying to create.
Said is a sufficiently mixed case that arguing with the moderators on the object level seems like a lost cause. My impression is that Said feeds both good and bad (but not terrible) norms, while making some positive object-level contributions, in a way that strongly annoys some people for essentially superficial reasons. And he’s being either persistently oblivious or illegibly principled in how that keeps happening, in ways that seem eminently avoidable by adjusting how he’s talking at a superficial level, without impacting any substantive points he’s making. (This is apart from his purely positive and important LW-related infrastructure contributions.) So the decision feels somewhat unjust, but not legibly unjust, and illegibly-possibly-unjust decisions must remain legitimate for practical reasons, so that the site admin retains steering superpowers.
In the Said/Duncan debacle, Duncan was making significant positive object-level contributions while also feeding some catastrophically terrible norms (in my subjective opinion that’s hard to make legible; I applaud Zack for making some progress at the time, though that also didn’t gain much traction). So in terms of long-term impact, the Said/Duncan case seemed much clearer to me than the case of Said on his own, but in both cases it’s not black and white, and hard to argue.
Another salient example of a different kind is the whole class of cases of borderline domestic abuse (at sufficiently low levels of terribleness where it’s hard to make any calls, including the cases where the damage is purely psychological), where similarly there are often two kinds of impact with opposite signs that can’t be meaningfully directly compared to each other. (I’m bringing this up as a well-known example that illustrates the inherent complexities of any legible/illegible good/bad impact mixture, rather than as suggesting any further analogousness.)
And also anything about norms is not particularly legible in general, for example my intermittent discussion of norms and their feeding doesn’t seem to spark any meaningful engagement (even though it’s something I’ve been talking about for years), so until and unless it does, or I discover better anchors for framing it that someone else planted, it also feels like a lost cause to argue any specific policy proposals based in this kind of thinking.
Funnily enough, I think I kind of feel about Duncan the same way Oli feels about Said. I detect a sinister and disquieting pattern in his writing that I cannot prove in a court of law or anything that is slightly LARPing as one. But I’m not trying to moderate any space he’s in.
(Maybe you misread what I intended about the Said/Duncan conflict? The wording in your comment seems a bit incongruous under the reading where you didn’t, sorry if I’m overthinking this. My point was that the outcome where Duncan mostly left was favorable in its longer term impact, at least so far, due to the norm influence that’s not necessarily even intended by Duncan himself, and so would be hard to argue or even know how to correctly attribute responsibility for. This is the largely illegible thing where Zack made a bit of a headway elucidating some aspects of it. But the impact of Said leaving seems unclear either way, while at the same time any issues seem more like the kind of thing he should’ve been able to fix and that’s therefore easier to attribute to his own decisions...)
(I appreciated Said’s comments on early woo stuff on this site, and I also appreciated the push back along the lines of, if you daily require a newly opened restaurant to show you a profit, you won’t ever see a profit.)
Reading through the litigation, I think the egregoric issue is the Voice of the People play. Public figures with public mandates get semi-possessed by what they imagine to be the shared soul of the movement, even as the “shared soul” seems to value, above all, inability to be possessed.
People get “manipulated” by this person’s persistent and assertive and emotive comments into defensive engagement—to the point that they are (not completely legibly) worried that this person’s judgment is internalized as a VOTP judgment (“statusing”). A very assertive and emotive (honest-emotive, not woo-emotive) person is always a candidate for being a VOTP, so people can get themselves manipulated by a Schrödinger’s VOTP. In parallel, there are the issues this comment describes about the OP.
The rhetoric of the OP begins with a historic reference to a mandate (archive vs lw2). A mandate of course lays claim to a person’s time, energy, and identity, so there’s really no way to not be bound by it. At the same time I don’t think the “laudatory” point was really a joke. This person has had a lot built onto and out of his contributions. People will imitate a person’s writing style and not realize who they got it from or how significant that is.
Yeah, there’s a strange blend in the OP between the imperative tone (this is what the future holds) and the greater-good tone. Imperative tone is more decisive. Maybe the OP believes the disdainful-critic-forward society is a local maximum that an actual democratic poll would vote for, against their own greater good.
It seems to me (in other cases) that this imperative tone often comes out when a mandate-holding VOTP is actually not sure, but (rightfully) appreciates that not being decisive will lead to being taken advantage of, with collapsing consequences for people they care about.
The apparent objective of the rhetoric (I think) is to hold together something very valuable, even with a clear view of potential forthcoming schism.
Yeah, if there’s one takeaway I have from this, it’s that this post isn’t long enough. Oliver should have run a Said sequence, directly addressing your, Zack’s, and Duncan’s perspectives one by one. It’s the only way to ensure due process.
I think this is actually correct in the sense that any legible general non-site-consensus considerations that led to this post having a nontrivial amount of content should’ve been standalone posts unrelated to the case of Said (perhaps on the topic of motivations for possible site policies in general), made sufficiently in advance to settle whatever impression they leave. And conversely, a post like this shouldn’t need to be long, or else it’s probably not making a good case from principles already established on the site.
I think case-law is generally better than legislative law, so I disagree that it is a good idea to establish precedent for complicated calls like this without any actual specific case to ground that precedent in. Having a concrete case that you abstract for the purpose of future decisions is just too useful (and also has a strong track record as a really important component of functional judicial systems).
And then, beyond that, digging into the details of something like this is always going to be long. Documents and arguments produced in even minor lawsuits often span hundreds of pages; as it turns out, reality just has a huge amount of detail. Even after you have established better principles, actually applying those principles still takes a lot of time and evidence.
I do think it would be good to have some of the above more factored out for the future, though IDK, case-law seems to work fine by just referencing past cases as the source of a principle. I did consider titling this post something like “LessWrong v. Said” to make it feel easier to cite in the future, but it ultimately felt too much like LARPing the legal system.
Case law did come to mind; the point is that any takeaways worth keeping should be able to stand on their own, without the inciting illustration. Their absence doesn’t necessarily make the case weak or premature, and the background dynamics of case law cause the general principles made salient in earlier cases to keep getting reexamined in new contexts. As a result, any principles that still stand much later would also stand on their own, abstracted from specific cases.
So for example there’s discussion of affective conflationary alliance attractors that doesn’t need any case law, and could just be referenced from this post (if there was a standalone post discussing it), making it shorter. This is the kind of refactoring I’m gesturing at in my above (distortionary) steelmanning of lc’s point.
Ah, yeah, I agree that stuff like the conflationary alliance attractor explanation does seem like it can stand on its own; I was thinking more of stuff like the “What does this mean for the rest of us?” section.
I might end up making a standalone post with those concepts in a few weeks when things have settled down, and doing so won’t feel like re-opening this thread.
Please, don’t do this.
Your reasoning amounts to “we need to increase the punishment to compensate for all the false negatives”.
If the only kind of error that existed was false negatives, you might have a point. But it isn’t. False positives exist too. And crimes that are harder to catch are probably going to have more false positives. Harsher punishments also create bigger incentives for either false positives, or for standards that make everyone guilty of serious crimes all the time, thus letting anyone be punished at the whim of the moderators while pretending that they are not.
Agree that you need to account for false positives (and the above math didn’t do that)!
Sometimes crimes are harder to catch, but you can still prove they happened without much risk of false positives. I do sure agree that the kind of misbehavior discussed in this post is at risk of false positives, so taking that into account is quite important for finding the right punishment threshold. Generally appreciate the reminder of that.
Sometimes what makes a crime “harder to catch” is the risk of false positives. If you don’t consider someone to have “been caught” unless your confidence that they did the crime is very high, then, so long as you’re calibrated, your false positive rate is very low. But holding off on punishment in cases where you do not have very high confidence might mean that, for most instances where someone commits the crime, they are not punished.
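To put rough numbers on this, here is a minimal simulation sketch of a calibrated judge who only punishes at high confidence. The base rate, the evidence model, and the threshold are all made-up assumptions of mine, not anything established in this thread:

```python
# Toy model: evidence is Gaussian, N(+1, 1) if guilty and N(-1, 1) if
# innocent, so the posterior below is calibrated by construction. All
# numbers here are illustrative assumptions.
import math
import random

random.seed(0)

BASE_RATE = 0.3   # assumed prior probability that the offense occurred
THRESHOLD = 0.9   # only punish when posterior confidence >= this

def posterior_guilty(evidence: float) -> float:
    """P(guilty | evidence) via Bayes; the log-likelihood ratio is 2 * evidence."""
    odds = (BASE_RATE / (1 - BASE_RATE)) * math.exp(2 * evidence)
    return odds / (1 + odds)

punished = punished_innocent = guilty_total = guilty_punished = 0
for _ in range(100_000):
    guilty = random.random() < BASE_RATE
    evidence = random.gauss(1.0 if guilty else -1.0, 1.0)
    guilty_total += guilty
    if posterior_guilty(evidence) >= THRESHOLD:
        punished += 1
        punished_innocent += not guilty
        guilty_punished += guilty

print(f"false-positive rate among the punished: {punished_innocent / punished:.3f}")
print(f"fraction of offenders ever punished:    {guilty_punished / guilty_total:.3f}")
```

Under these made-up numbers, only a few percent of punished cases are innocent, but on the order of two-thirds of actual offenders never cross the threshold: the crime is “hard to catch” precisely because the judge is being careful about false positives.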
Heads-up: I am nearing the limit of the roughly 10 hours I set aside for engaging on this, so I’ll probably stop responding to things soon (and also if someone otherwise wants to open up this topic again in e.g. a top-level post, I’ll probably just link back to the discussion that has been had here, and not engage further).
Ok, I think that’s a wrap for me. Thanks all for the discussion so far. I am now hoping to get back to all the other work I am terribly behind on.
My two cents. There’s a certain kind of post on LW that to me feels almost painfully anti-rational. I don’t want to name names, but such posts often get highly upvoted. Said was one of very few people willing to vocally disagree with such posts. As such, he was a voice for a larger and less vocal set of people, including me. Essentially, from now on it will be harder to disagree with bullshit on LW—because the example is gone, and you know that if you disagree too hard, you might become another example. So I’m not happy to see him kicked out, at all.
My thoughts are similar to yours, although I’m more willing to tolerate posts that you call “almost painfully anti-rational” (while still wishing Said was around to push back hard on them). I think in the early stages of genuine intellectual progress, it may be hard to distinguish real progress from “bullshit”. I would say that people (e.g. authors of such posts) are overly confident about their own favorite ideas, rather than that the posts are clearly bullshit and should not have appeared. My sense is that it would be a bad idea to get rid of such overconfidence completely, because intellectual progress is a public good and it would be harder to motivate people to work on some approach if they weren’t irrationally optimistic about it; but it would be equally bad or worse if there were little harsh or sustained criticism to make clear that at least some people think there are serious problems with their ideas.
FWIW my personal intention—only time will tell whether I actually stick to it—is to be a little more vigorous in disagreeing with things that I think likely to be anti-rational, precisely because Said will no longer be doing it.
(Just a brief note that I accidentally left a thumbs-down react on this comment of gjm’s for some period in the last hour. I had no intention to; I like/support gjm’s intention. I have been working on the reacts code lately, so that probably led me to accidentally leave one on the live site.)
Bad call. You don’t exactly have an unlimited supply of people who have a solid handle on the formative LW mindset and principles from 15 years ago and who are still actively participating on the forums, and latter-day LessWrong doesn’t have as much of a coherent and valuable identity to stand firmly on its own.
A key idea in the mindset that started LessWrong is that people can be wrong. Being wrong exists as an abstract thing to begin with; it’s not just a euphemism for poor political positioning. And people in positions of authority can be wrong. Kind, well-meaning, likable people can be wrong. People who have considerate friendly conversations that are a joy to moderate can be wrong. It’s not always easy to figure out right and wrong, but it is possible; and it’s not always socially harmonious to point it out, but it used to be considered virtuous still.
A forum that has principles in its culture is going to have cases where moderation is annoying around something or someone who doggedly sticks to those principles. It’s then a decision for the moderators whether they want to work to keep the forum’s principles alive or to have a slightly easier time moderating in the future.
The supply sure has a lot more people who still exist but don’t come here much anymore, and I’m hopeful we’ll see some coming back now. Hey Duncan! (Edit: although, if Duncan does come back, I won’t go easy on his ideas; I’ll just try to respect his time about it: “I think you’re wrong, here’s why; (if true) I appreciate your effort sharing the claims; (if true) I won’t go many rounds unless it seems to be leading us to an insight.”)
I’m pretty sure people drifted away because of a more complex set of dynamics and incentives than “Said might comment on their posts” and I don’t expect to see much of a reversal.
I think in my whole life I have once seen a person come back because another person left, and they didn’t stay long anyway. Broadly speaking I don’t think this ever works.
FWIW, my interaction with LW and, more broadly, the rationalist scene in the Bay Area was most of what formed my current stance that communities I want to participate in operate on whitelists, not blacklists. This is such a fundamental shift that it affects everything about how I socialize, and it made my life much better. That banning someone requires a post of this effort level predicts that lots and lots of other good things aren’t happening, and that cost is mostly invisible.
FWIW I don’t think of LW as a ‘community’ in any strong sense. Most people here won’t be at your wedding or your funeral or pick you up from the airport if you’re traveling.
The connection is in deciding whether or not to regularly participate. Said didn’t affect my decision here that much, but I’m way, way above average in ability to dismiss criticism that I feel is non-central.
Good work.
The hardest part of moderation is the need to take action in cases where someone is consistently doing something that imposes a disproportionate burden on the community and the moderators, but which is difficult to explain to a third party unambiguously.
Moderators have to be empowered to make such decisions, even if they can’t perfectly justify them. The alternative is a moderation structure captured by proceduralism, which is predictably exploitable by bad actors.
That said — this is Less Wrong, so there will always be a nitpick — I do think people need to grow a thicker skin. I have so many friends who have valuable things to say, but never post on LW due to a feeling of intimidation. The cure for this is, IMO, not moderating the level of meanness of the commentariat, but encouraging people to learn to regulate their emotions in response to criticism. However, at the margins, clipping off the most uncharitable commenters is doubtless valuable.
Like seemingly many others, I found Said a mix of “frequently incredibly annoying, seemingly blind to things that are clear to others, poorly calibrated with the confidence level he expresses things, occasionally saying obviously false things[1]” and “occasionally pointing out the-Emperor-has-no-clothes in ways that are valuable and few other people seem to do”.
(I had banned him from my personal posts, but not from my frontpaged posts.)
And I wish we could get the good without the bad. It sure seems like that should be possible. But in practice it doesn’t seem to exist much?
I have occasionally noticed in myself that I want to give some criticism; I could choose to put little effort in but then it would be adversarial in a way I dislike, or I could choose to put a bunch of effort in to make a better-by-my-lights comment, or I could just say nothing; and I say nothing.
I think this is less of a loss than I think Said thinks it is. (At least as a pattern. I don’t know if Said has much opinion about my comments in specific.) But I do think it’s a bit of a loss. I think it’s plausible that a version of me who was more willing to be disagreeable and adversarial would have left some valuable comments that in fact never got written.
(But also, it’s plausible that that version of me wrote fewer of my actually-good comments; and that some of the additional comments he wrote turned out to be crap; and that his refusal to put in effort in some cases led to him learning less.)
So is this just, like, a personality dial? Where you only get the EHNC (Emperor-has-no-clothes) comments if it’s turned so far over in one direction that you also get the other stuff? Idk, doesn’t seem like that should be the case. Apart from anything else, “a version of Said who has very similar personality but is, like, less wrong about stuff” would IMO be a big improvement. (But maybe it’s harder to become less wrong with the dial set over there? Still, I dunno, doesn’t feel quite right.)
But for whatever reason, it does seem like the good thing Said was providing is rare, and I’m sad about losing it.
On net I’m pretty sure I agree with the ban. And I strongly appreciate the amount of thoughtfulness put into the decision and this post.
For honesty’s sake I should admit the example I had in mind when I wrote that was a bit less obvious than I’d remembered. Said: “X is strictly superior to Y.” Me: “no it’s not for reasons A, B.” Said: “So just do Z, come on, this problem has been solved for decades.” Me: “Still has A, and only a partial solution to B because of C.” Said: (No reply.)
It’s maybe not obvious that X is not strictly superior; and while I do think it’s obvious that Z still has problem A, Said admittedly never outright says it doesn’t… but like, still. This comment thread by itself isn’t a big deal of course, but I don’t think it’s particularly out of distribution for Said.
I don’t spend enough time in the LW comments to have any idea who Said is or to be very invested in the decision here. I think I agree with the broad picture here, and certainly with the idea that an author is under no obligation to respond to comments, whether because the author finds the comments unhelpful or overly time-consuming or for whatever other reason. That said, I am mostly commenting here to register my disagreement with the idea of giving post authors any kind of moderating privileges on their posts. That just seems like an obviously terrible idea from an epistemic perspective. Just because a post author doesn’t find a comment productive doesn’t mean someone else won’t get something out of it, and allowing an author to censor comments therefore destroys value. LW is the last site I would have expected to allow such a thing.
I think ultimately someone needs to do the job of moderation, and inasmuch as we want to allow for something like an archipelago of cultures, the LW moderation team really can’t do all the moderation necessary to make such things possible.
Note that there are a bunch of restrictions on author moderation (a rough, hypothetical sketch of what a check like this could look like follows the list):
The threshold for getting the ability to moderate frontpage posts is quite high (2,000 karma)
The /moderation page allows you to find any deleted comments, or users banned from others’ posts
We watch author moderation quite closely, and would both change the rules and limit the ability of an individual to moderate their posts if they abuse it
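Here is the sketch referenced above: a permission check of roughly this shape. The function, the dataclass, and the personal-post rule are my illustrative assumptions, not LessWrong’s actual code:

```python
# Hypothetical sketch of an author-moderation permission check; all names
# here are illustrative assumptions, not LessWrong's real implementation.
from dataclasses import dataclass

FRONTPAGE_MOD_KARMA = 2_000  # the threshold mentioned in the list above

@dataclass
class User:
    karma: int
    author_moderation_revoked: bool = False  # set by admins in case of abuse

def can_moderate_own_post(user: User, post_is_frontpage: bool) -> bool:
    """Whether an author may moderate (e.g. delete, ban) on their own post."""
    if user.author_moderation_revoked:
        return False
    if post_is_frontpage:
        return user.karma >= FRONTPAGE_MOD_KARMA
    return True  # assumed lower bar for personal (non-frontpage) posts

# Any deletions or bans would additionally be listed on the public
# /moderation page, which is what keeps the whole mechanism auditable.
```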
In general, I am not a huge fan of calling all deletion censorship. You are always welcome to make a new top-level post or shortform with your critique or comments. The general thing to avoid is always forcing everyone into the same room, so to speak.
I do think an alternative is for the LW team to do a lot more moderation, and more opinionated moderation, but I think this is overall worse (both because it’s a huge amount of work, and because it centralizes the risk, so that if we end up messing up or being really dumb about something, a perspective gets fully excluded from the site, instead of just some well-defined subset of it). I don’t think that, at least currently, voting alone does enough to make for functional discussion spaces.
This is the comment thread mentioned in the post, under which Said can comment for the next two weeks. Anyone can ask questions here if you want Said to have the ability to respond.
Said, feel free to ask questions of commenters or of me here (and if you want to send me some statement of less than 3,000 words, I can add it to the body of the post, and link to it from the top).
(I will personally try to limit my engagement with the comments of this post to less than 10 hours, so please forgive me if I stop engaging at some point; I just really have a lot of stuff to get to)
Edit: And the two weeks are over.[1]
I decided to not actually check the “ban” flag on Said’s account, on account of trusting him not to post and vote under it; this allows him to keep accessing any drafts he has on his account, and other things that might benefit from being able to stay logged in.
I am, of course, ambivalent about harshly criticizing a post which is so laudatory toward me.[1] Nevertheless, I must say that, judging by the standards according to which LessWrong posts are (or, at any rate, ought to be) judged, this post is not a very good one.
The post is very long. The length may be justified by the subject matter; unfortunately, it also helps to hide the post’s shortcomings, as there is a tendency among readers to skim, and while skimming to assume that the skimmed-over parts say basically what they seem to, argue coherently for what they promise to argue for, do not commit any egregious offenses against good epistemics, etc. Regrettably, those assumptions fail to hold for many parts of the post, which contains a great deal of sloppy argumentation, tendentious characterizations, attempts to sneak in connotations via word choice and phrasing, and many other improprieties.
The problems begin in the very first paragraph:
This phrasing assumes that there’s something to “understand” (and which I do not understand), and something which I should wish to “learn” (and which I have failed, or have not tried, to learn). This, of course, begs the question. The unambiguous reality is that I have disagreements with the LW moderation team about various things (including, as is critical here, various questions about what are proper rules, norms, and practices for a discussion forum like this one).
Of course, phrasing it in this neutral way, although it would be unimpeachably accurate, would not afford @habryka the chance to take the moral high ground. In a disagreement, after all, one side may be right, or the other; or both could be wrong. One must argue for one’s own side.
But by describing the situation as one in which he has some (presumptively correct) understanding, which remains only for him to impart to me, and some (presumptively useful) skill, which remains only for me to learn, @habryka attempts to sidestep the need to make his case.
Please note that this is not a demand that said case be made in this post itself (nor even that it be summarized, if previously made… although a hyperlink would not be amiss here—if indeed there’s anything to link to!). I am simply saying that an honest account would only say: “I have had disagreements with Said; we have discussed, debated, argued; I remain convinced of my view’s correctness”. It would not try to sneak in the presumption that there’s some failure to understand on my part, and only on my part.
(After all, I too can say: “For roughly 7 years, I have spent many hours trying to get Oliver Habryka to understand and learn how to run a discussion forum properly by my lights.” Would this not sound absurd? Would he not object to this formulation? And rightly so…)
Of course, the truth of this claim hinges on how many is “few”. Less than 10? Less than 100? Less than 1,000? Still, intuitively it seems like an outlandish claim. If you ask a hundred people, randomly selected out of all those who are familiar with LessWrong, to name those people who have been important to the site’s culture, how many of them will even recall my name at all, to say nothing of naming me in answer to the question? If the number exceeded the single digits, I would be flattered… but it seems unlikely.
This claim has been made before. When investigated, it has turned out to be dubious, at best. (The linked comment describes two cases where some “top author” is described by @habryka as having this sort of view, and the reality turns out to be… not really that. I would add the case of @Benquo as well, where failing to mention this comment—which was written after the discussions cited later in this post—constitutes severe dishonesty.)
We have, to my knowledge, had zero examples of this sort of claim (“top author X cites Said as a top reason for why they do not want to post or comment on LW”) turning out to just be straightforwardly true.
This is important on its own, but it’s also important for the purposes of evaluating any other claims, made by @habryka, that are based on purported information which is available to him (e.g. in his capacity as LW administrator), but which are not readily verifiable. For example:
I expect that whatever impression is formed in a typical reader’s mind upon reading this line, the reality is something far less impressive, where my comment(s) turn out to play a far less significant role. (Again, this supposition is not a vague denial, but rather is based on @habryka’s aforementioned record w.r.t. describing other people’s views about me.)
It seems remiss not to note that the ensuing discussion thread contained over a dozen more comments from me, which together come to almost 6,000 words, and in which I explain my reasoning at length (and several of which are highly upvoted). (This is counting only comments on the object level, i.e. elaborating on my top-level comment; I am not counting the comments in the “meta” subthread started by Ben Pace.) To say that I “refuse[d] to do much cognitive labor in the rest of the thread” is, quite frankly, implausible.
(Were I more inclined to play fast and loose with connotations, I could say that I was trying to get my interlocutors to understand my position, but failed…)
The passive voice is inappropriate here. Those 100+ comment threads are, invariably, started and kept going largely by the LW moderators. (If you doubt this, I invite you to check.)
(Emphasis mine.)
That’s the key, isn’t it? In your math-department scenario, the bad critic is asking questions that are easily answered. But is this the case for questions that I ask on Less Wrong?
Here’s an exercise: look up all of my comments that are some version of “Examples?”, and count how many of them were “easily answered” (i.e., by the post/comment author, or someone else, readily rattling off a list of examples of whatever it is).
Before trying this, what would you predict the percentage will be? 100%? 50%? 10%?
If it turns out that I’m not asking questions that are “easily answered”, then the analogy fails to hold, and the argument has no force.
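The counting half of this exercise is mechanical enough to sketch in code. In the sketch below, the file name and field names are hypothetical placeholders for however one exports the comments; the “easily answered” half necessarily remains a human judgment:

```python
# Rough sketch of the counting exercise. "said_comments.json" and its
# fields ("body", "url", "replies") are hypothetical placeholders.
import json
import re

ASKS_FOR_EXAMPLES = re.compile(r"\bexamples?\s*\?", re.IGNORECASE)

with open("said_comments.json") as f:
    comments = json.load(f)

asks = [c for c in comments if ASKS_FOR_EXAMPLES.search(c.get("body", ""))]
print(f"{len(asks)} comments that are some version of 'Examples?'")

# Surface each candidate with its direct replies for manual review;
# whether a given question was "easily answered" is the reader's call.
for c in asks:
    print(c.get("url", "<no url>"), "-", len(c.get("replies", [])), "direct replies")
```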
You link this comment in a footnote, but that is not enough; the fact is that your characterization of my view here is deeply misleading. (Indeed, I have argued in favor of an ignore system for LessWrong—an argument to which you were entirely un-receptive!)
As far as I can tell, the linked comment does not, in fact, ask me to do or not do anything. Most of the comment lays out various bits of reasoning about discussion norms and such. Then there’s this bit:
That’s really the only concrete part of the comment. As you can see, it asks nothing of me—certainly nothing to do with “stop implying obligations to authors”.
How could I have refused the mod team’s request, when no request was made? (And if I did “reject… [something] as a thing [I] was doing or a coherent ask”, why not link to the comment or comments where this rejection was expressed?)
This is quite a tendentious characterization of a comment thread where I only express and argue for my views, without at any point calling for anyone to do anything, encouraging anyone to do anything, etc. If I called for “authors to face censure”, the obvious questions are—what censure? In what form, from whom, how? But if one tries to find the answers to these questions (by clicking on the link, perhaps), it turns out to be impossible, because… the alleged calls for “authors to face censure” never took place.
What option, indeed? Well, except for options like implementing a robust ignore system for LessWrong (the UX design of which I would be happy to help with); or creating “subreddits” with various degrees of expected rigor (akin to the “Main” vs. “Discussion” distinction on the old LessWrong—perhaps by expanding the role of the Shortform feature, and adding some UX affordances?); or making explicit rules forbidding certain sorts of comments; or any number of other possibilities…[2]
There is, of course, a sense in which this entire comment is an exercise in pointlessness. After all, I hardly expect that @habryka might read my commentary, think “you know, he’s right; my arguments are bad”, and reverse his decision. (Given his position as LessWrong’s admin, it is not as if he needs to justify his banning decisions in the first place!)
Still, there was, presumably, some purpose to writing this post—some goal ostensibly served by it. Whatever that goal might be, to the extent that it is well-served by a post as deeply flawed as this one, I oppose it. And if the goal is a worthy one, then the inaccuracies, misleading statements, tendentious characterizations, and other epistemic and rhetorical misdeeds with which the post is rife, can only be detrimental to it.
Although I must note that I cannot, in good conscience, accept all of the praise which the post heaps on me. (More on that later.)
I leave off such obviously outlandish and improbable suggestions as “encouraging authors to reply to questions and criticisms of their posts by answering the questions and addressing the criticisms”.
FWIW, this seems to me like a totally fine sentence. The “by my lights” at the end is indeed communicating the exact thing you are asking for here, trying to distinguish between a claim of obvious correctness, and a personal judgement.
Feel free to summarize things like this in the future, I would not object.
It of course depends on how active someone on LessWrong is (you are not as widely known as Eliezer or Scott, of course). My modal guess would be that you would come in around 20th place in terms of how often people would bring up your name. I think this would be an underestimate of your effect on the culture. If someone else thinks this is implausible, I would be happy to operationalize, find someone to arbitrate, and then bet on it.
You are here responding to a sentence from a previous draft of the post. My guess is you want to edit.
I mean, I have a whole section of this post where I am making explicit rules forbidding certain sorts of comments. That’s what the precedent section is about. Of course, you are likely to disagree that those qualify as appropriate rules, or good rules, but that’s what got us to this point.
Already edited by the time you posted the comment.
Quickly responding to this: The OP directly links to 2 authors who have made statements to this effect on LessWrong itself, and one author who, while saying it wasn’t a major reason for leaving, was still obviously pretty upset (Benquo)[1]. I wouldn’t consider Gordon a “top author” but would consider Duncan and Benquo to be ones. There are more, though I have fewer links handy.
I wasn’t able to find an easily extractable quote from Duncan, though I am sure he would be happy to provide affirmation of his position on this when he reads this, and readers can form their own judgement reading this thread.
We also have someone like DirectedEvolution saying this:
I also don’t have a public quote by EVN handy, though I am sure she would also be happy to attest to something close to this.
I don’t have as many receipts as I would like to be able to share here, but saying there are “zero examples” is just really straightforwardly false. You were even involved in a big moderation dispute with one of them!
While you link to a comment where he says some more positive things about you 7 years ago, I quote from his most recent overall summary in the OP, where, to be clear, he was not overall in favor of banning you, though he really did not have a positive impression.
Who is “EVN”…?
Elizabeth (Van Nostrand).
I… actually can’t figure out what you’re referring to, here. Could you quote the part of the OP which you have in mind?
Duncan and Gordon.
…?
But you just said that you don’t consider Gordon a “top author”, and you can’t find a quote from Duncan saying anything like this?
So it is in fact straightforwardly true to say that there are zero examples of “top author X cites Said as a top reason for why they do not want to post or comment on LW” turning out to just be straightforwardly true.
If you get people to post new things, then this may change. But what I wrote seems to me to be entirely correct.
No, at the very least it’s Duncan? That’s literally the text of my comment (though slightly circuitously).
I didn’t say I couldn’t find any quote, I said I couldn’t find any easily extractable quote. The relevant thread contains plenty of multi-paragraph sections that make this position of his quite clear, just nothing that happened to be easy to remove from its context.
Edit: Ok, fine, after spending 20 more minutes on reading through random old threads, here is a pretty clear and extractable comment from Duncan (it really was also otherwise very obvious from the link I provided, but due to some of the indirect nature of the discussion was hard to quote):
This sure seems like an example of a top author citing you directly as the reason for not wanting to post on LW.
Yep, that is definitely one example, so the count now stands at one example.
I’m having trouble modeling you here, Said. When you wrote there were zero examples, what odds would you have put that nobody would be able to produce a quote of anyone saying something like this? What odds would you currently put that nobody can produce a similar quote from a second such author?
You say “the count now stands at one example” as though it’s new information. Duncan in particular seems hard to have missed. I’m trying to work out why you didn’t think that counted. Maybe you forgot about him saying that? Maybe it has to be directly quoted in this thread?
I’ve already explained this multiple times, but sure, I’ll explain it again:
If someone says “X has happened a bunch of times”, and you say “Examples please?”, and they say “here are examples A, B, and C”; and you look at A, and it turns out to not be X; and you look at B, and it turns out to not be X; and you look at C, and it turns out to not be X; and you say “… none of those things are X, though?”; and your interlocutor continues to insist that “X has happened a bunch of times”…
… what is the correct position for you to take, at that point?
It is, honestly, quite distressing, how many times I have had to explain this, not just in this context but in many others: if someone makes a claim, and when asked for examples of that claim provides things that turn out not to actually be examples, then not only does their claim remain totally unsupported at that point, but also, the fact that this person thought that the given things were examples of their claim, when they actually were not—the fact that they made this error—should cause you to doubt their ability to recognize what is and is not an example of their claim, in general.
As I have written before:
(The other possibility, of course, is that the claimant was simply lying, in which case you should integrate that into your assessment of them.)
Pretty high. If such a quote were available, it would have been produced already. That it has not been, is not for lack of trying, it seems to me.
I do not keep in my head the specifics of every comment written in every conversation on LessWrong that involves me. I recalled the conversation in vague terms, but given @habryka’s track record on this subject, I expected that there was a good chance that he was misrepresenting what Duncan had said, in the same way that he misrepresented what several other authors had said. That turned out not to have been the case, of course, but the expectation was valid, given the information available at that time.
I mean, I literally already provided a quote quite close to what you desire for DirectedEvolution (is his wording as exact a match as Duncan’s? No, but I think it is close enough to count). To remind you, the quote is:
Now we can argue about DirectedEvolution as a “top author”. I personally think he is a pretty good commenter and potentially deserving of that title.
I really haven’t tried to produce many quotes, because those quotes have little bearing on my overall bottom-line on this situation. I have enough inside-view model of this situation to cause me to make the same decision even if no top author had complained about you, and you will find that I put little emphasis in the top post on something like “the number of complaints I have gotten about you”.
But sure, here is another one, if you really want to go out on a limb and predict that no such quotes exist (this time from Lukas Gloor who I do consider a top author):[1]
And here, though of course it’s another correlated piece of evidence, is Ray’s summary of his epistemic state two years ago, which I agree isn’t a direct quote, but at least shows that Ray would also have to be totally making things up for your accusations to check out:
If you want another piece of evidence, a quick look at the /moderation page reveals that you are by a wide margin the most frequently banned user on LessWrong:
Elizabeth writes in a deletion reason for one of your comments:
Is that maximally clear? No. But again nobody here ever claimed there are public receipts for all of this.
(I should have disengaged earlier, but since you seem to insist the history of complaints about you is made up, I figured I would comment with some more things that aren’t private communication and I can easily share)
Note that he importantly also says:
This also roughly aligns with the period where I thought Said was behaving somewhat better (until it got worse again in the past few months, precipitating this ban). Maybe Lukas agrees, or not. The comment itself nevertheless seems clear.
Indeed, we certainly can argue about that. If he’s a “top author” but Gordon isn’t (as you have said), then your concept of “top author” is incoherent.
Absolutely, hilariously false. Your own words, from the OP:
This emphasis is absolutely not something which you can credibly disclaim.
… surely you jest? I have nothing at all against the guy, but he’s written five posts, ever, in 13 years of being a LessWrong member. How does he qualify as a “top author”, but not Gordon?
By the standards implied by these categorizations, it would seem that I must also be a “top author”!
You know perfectly well how little this sort of thing is worth. Yes, it’s correlated evidence. And it’s another report of more alleged private communications. Any way to verify them? Nope. Any way to check whether some or most or all of them are being mis-remembered, mis-characterized, mis-summarized, etc.? Nope.
Of course Ray would not have to be “totally making things up”, just like you have not been “totally making things up”—that is obviously a strawman! You weren’t “totally making up” the examples of Jacob Falkovich, Scott Alexander, etc.—your reporting of the relevant facts was just severely skewed, filtered, etc. Why the same cannot be true for Ray, I really can’t see.
Whether I “want another piece of evidence” is immaterial to the question, which is whether the already-claimed evidence in fact exists and in fact is as described. Introducing more pieces of other evidence has no bearing on that.
Elizabeth is (was? I’m not sure where to even find the most up to date version of this info, actually) a LessWrong moderator. This obviously disqualifies her opinion about this from consideration.
Just want to note he has many many long and thoughtful high-karma comments, and I value good commenters highly as well as good posters.
Oh? But then I must be even more of a “top author”, yes? (I also have “many many long and thoughtful high-karma comments”, after all; in approximately as many years of being an LW member, I’ve accumulated about five times as much karma as Lukas has!)
And what of Gordon, of whom @habryka has said that he is not a “top author”—but he, too, seems to have “many many long and thoughtful high-karma comments”?
This standard of who is and is not a “top author” seems awfully fluid, I must say…
I mean, you are not by my lights, as we have just banned you. But certainly not for lack of participation.
Lukas has written 700 comments, and has ~4,000 karma. I also happen to quite like a lot of his comments. Writing posts is not a requirement to be a top author on this site, by my lights.
No, I can credibly disclaim it, because what you are quoting is a single half-sentence, in a footnote of a 15,000 word post. That is of course absolutely compatible with it not being emphasized much!
How could it have been mentioned at all while being emphasized any less? I guess it could have been in a parenthetical in addition to being in a footnote, but clearly you are not going to put the line there. By the same logic, our policy that we might delete content that doxxes people could not be characterized as having little emphasis in the post, given that I also mention that offhand in a footnote, and in that case it’s even a full sentence with its own footnote!
So a “top author” means… what exactly? Just your own personal opinion of someone?
I have written over 4,500 comments, and have ~17,000 karma. Gordon has written over 2,700 comments, and has ~10,000 karma.
And yet this is not enough to make either of us “top authors”, it seems. So why is Lukas’s much lower comment count and much lower karma total sufficient to make him a “top author”? It would seem that writing any particular number of posts, or comments, or having any particular amount of karma, is neither necessary nor sufficient for being a “top author” on this site! Very strange!
Ah, yes, I almost forgot—you “happen to quite like a lot of his comments”. So it does seem to come down to just your own personal opinion. Hm.
Yes, of course it isn’t. Eugine Nier isn’t a “top author”. Neither is David Gerard. Of course karma, or volume of comments or posts, is not sufficient. This sounds about as deranged as showing up in a court of law and saying “oh, so neither the dollars in my bank account, nor my grades in high school, are sufficient to establish whether I am guilty of this crime you accuse me of? Very strange! Very suspicious!”. Of course they aren’t!
Then why did you cite Lukas’s comment count and karma value?
And I ask again: what qualifies someone as a “top author”? Is it just your own personal opinion of someone?
Yeah, approximately. Like, I could go into detail on my model of what would qualify someone as a “top author”, but that really doesn’t seem very helpful at this point. I didn’t have any particularly narrow or specific definition in mind when I used these very normal words, which readers would not generally assume have hyper-specific definitions; I used them the same way I use all words. In this case, it means something roughly like “author I consider in the top 50 or 100 active authors on the site in terms of how much they contribute positively to the site”.
Oh, certainly readers wouldn’t assume any such thing. But you are (yet again!) strawmanning—who said anything about “hyper-specific” definitions?
But one thing that most readers would assume, I am quite sure, is that you have some objective characteristics in mind, something other than just whether you like someone (or even “how much they contribute positively to the site”, which is naught but meaningless “vibes”).
For example, they might assume that “top author” meant something like “top in post karma or popularity or being cited or being linked to or their posts being evaluated for quality somehow in some at least semi-legible way”. They might assume that “who are the top authors on LW” would be a question that would be answerable by looking at some sort of data somewhere, even if it’s hard to collect or involves subjective judgments (such as reviews, ratings, upvotes, etc.). They might assume, in short, that “who are the top authors on LW” is a question with an intersubjectively meaningful answer.
I am quite sure that they would not assume the question to be one that is answerable only by the method of “literally just ask Oliver Habryka, because there is no other way of answering it and it is not meaningful in any other way whatsoever”.
I took “top author” to mean something like “person whose writing’s overall influence on LW has been one of the most positive”. I would not expect that to be equivalent to anything mechanically quantifiable (e.g., any combination of karma, upvotes, number of links, number of comments, proportion of replies classified as positive-sentiment by an LLM, etc.), though I would expect various quantifiable things to correlate quite well with it. I would not take it to mean “person whom Oliver Habryka likes” but I would expect that Oliver’s judgement of who is and isn’t a “top author” to be somewhat opaque and not to come down to some clear-cut precisely-stated criterion. I would not expect it to mean something objective; I would expect it to be somewhat intersubjective, in that I would e.g. expect a lot of commonality between different LW participants’ assessment of who is and who isn’t a “top author”.
There is a lot of space between “completely meaningless, nothing but vibes, just Oliver’s opinion” and “answerable by looking at some sort of data somewhere”. I would take “top author” to live somewhere in that space, and my guess (for which I have no concrete evidence to offer, any more than you apparently do for what you are “quite sure most readers would assume”) is that the majority of LW readers would broadly agree with me about this.
This is hard to believe. It doesn’t seem to match how people use words. If you asked 100 randomly selected people what the phrase “top authors” means, how many do you think would come up with something about “overall influence on [something] has been one of the most positive”? It’s a highly unnatural way of ranking such things.
And yet it clearly does mean exactly that.
No, I really don’t think that there is.
Well, right now my comment saying what I think “top author” means to most LW readers is on +12/+4 while yours saying what you think it means to most readers is on −18/−10. LW karma is a pretty poor measure of quality, but it does give some indication of what LW readers think, no?
And no, it does not clearly mean “person whom Oliver Habryka likes”. You can get it to mean that if you assume that all subjective evaluations collapse into “liking”. I do not make that assumption, and I don’t think you should either.
Don’t be ridiculous. Of course it doesn’t give any indication. My comment is that low because of two LW mods strong-downvoting it. That’s literally, precisely the reason: two strength-10 downvotes, from the mods. This says nothing about what “LW readers” think.
Almost every single one of my comments under this post has been getting strong downvotes from at least one mod. Judging what “LW readers” think on this basis is obviously absurd.
(I didn’t agree-vote on either gjm’s comment or your comment, FWIW. I did downvote yours, because it does seem like a pretty bad comment, but it isn’t skewing any agreement votes)
I was going to type a longer comment for the people who are observing this interaction, but I think the phrase “case in point” is superior to what I originally drafted.
I confirm that my understanding of “top author” was close to what Said describes here.
You also provide an appendix of previous moderation decisions, which you offer as background and support for your decision. A quote from that appendix:
And, at the beginning of the post—not in an appendix, not in a footnote, but in the main post body:
This, again, is about users’ complaints, and the number and distribution thereof.
You seem unable to conceive that the complaints aren’t the primary thing going wrong, but are merely a sign of it. In principle, there could be a user on a web forum who generated many complaints, where Habryka and I thought the complaints baseless. The mere presence of complaints is neither necessary nor sufficient for wanting to ban someone; in this case it is relevant evidence that your energy-sucking and unproductive comments have become widespread, and it is a further concerning sign that you are the extremal source of complaints, well worth mentioning as context for the ban.
As has often been the case, you will not understand the position or perspective of the person you’re in a comment section with, and obtusely call their position ridiculous and laughable at length; I have come to anticipate that threads with you are an utter waste of my time as a commenter and other people’s time as readers, and this thread has served as another such example.
Uh… yeah, of course the complaints aren’t the primary thing going wrong.
Why would you think that I “seem unable to conceive” of this? This is really a very strange reply.
The OP uses the complaints as an illustration of the supposed problem, and as evidence for said supposed problem.
If the alleged evidence is poor, then the claim that the supposed problem exists is correspondingly undermined.
Is this not obvious?
That’s a thread you’re pulling on. But as part of it, you wrote:
Note that you didn’t simply question Habryka when he said he didn’t put a ton of emphasis on the number of complaints; rather, you made a strong status-lowering move of claiming that his claims were laughable and ‘absolutely’ false. Yet in the whole 15,000-word post he mentions it in a single footnote, and furthermore (as I just explained) it wasn’t central to why the ban is taking place, which is why this single mention is indeed ‘little emphasis’. So I expect you will of course be very embarrassed and acknowledge your mistake in attempting to lower his status by writing that his claim was laughable, when it was true.
Or, like, I would expect that from a person who could participate in productive discourse. Not you! And this is another example of why you won’t be around these parts no more: the combination of saying obviously false things and attempting to lower people’s status and embarrass them for saying obviously true things.
Yadda yadda, you don’t understand how I could possibly see this in anything you wrote, you claim there is no implicit status dimension in your comments, you ask a bunch of questions, say my perspective is worthy of no respect and perhaps even cast aspersions on my motivations, hurrah, another successful Said Achmiz thread. I hope to have saved you the need to write the next step of this boring dance.
What’s to question? The post is the post. We can all read it. On the subject of “what is actually in the post”, what question can there be?
This, as I have already pointed out, is not true.
This also does not seem like a credible claim, as I’ve argued. I have seen no good reasons to change this view.
It was not true.
It was true.
(I admit a slight imprecision when I wrote that it was mentioned only once; Habryka also mentioned it once in an appendix, and mentioned that people had many complaints about the culture which he believes stem from you. This was “little emphasis” relative to all the analysis of sneer culture and asymmetric effort ratios and so on.)
And praise! It was a setup and explanation symmetric in complaint and praise!
I kinda wish the subsequent back-and-forth between you, Habryka, and Ben downthread hadn’t happened yet, because I was hoping to elicit a more specific set of odds (is “pretty high” 75%? 90%? 99%?) and see if you wanted to bet.
I can sympathize with the feeling where it seems an interlocutor says false things so often that if they said it was sunny outside, I’d bring an umbrella. I also haven’t been tracking every conversation on LessWrong that involves you, but, that said, even in a world where Habryka was entirely uncorrelated with truth, I’d have remembered the big moderation post about the two of you and guessed that Duncan at least would have said something along those lines.
No, the count already stood at at least one example. The citation was already there; you just, for some reason, asked me to waste 20 minutes of my life finding a quote that was easier to extract than the reference to the discussion section that already sufficiently demonstrated this point (a quote which you very likely already knew about when you wrote this comment, because you were literally the direct recipient of it and responded to it).
Neither of us had, for even a second, any doubt that we could find a Duncan comment to this effect. The point of the exercise of denying its existence is beyond me.
I’ll explain, then.
In general, in matters of public interest, which take place in the public eye, claims that concern facts of relevance to the matter under discussion or dispute ought not to be taken on anyone’s word. “Just trust me, bro” is not an acceptable standard of evidence, in any serious matter. This is the case even if (a) the claim is true, and (b) the one who demands the evidence personally knows that it’s true.
When the moderator or administrator of a forum/community makes some claim about some dispute or some individual member who has some connection to the dispute, that claim ought to be trusted even less than claims normally are, and held to a higher standard of evidence. (In general, those who wield authority must be held to a higher standard of evidence. Epistemic lenience toward those who have power is both epistemically irrational and ethically improper—the former, because in such situations, the powerful often have a great incentive to mislead; the latter, because lenience in such cases serves the interests of those who misuse their power.)
And you, personally, have shown a remarkable[1] willingness, on this subject, to lie, write in deeply misleading ways, misrepresent and distort the facts, describe and characterize events and situations in ways that create inaccurate impressions in naïve readers, and otherwise communicate in unprincipled and deceptive ways. (Examples: one, two, three.)
So when you—the administrator of LessWrong, writing about a purported fact which is highly relevant to a moderation dispute on LW—claim that a thing is true, the proper response is to say “prove it”. This is especially so, given that you, personally, have a singularly unimpressive track record of honesty when making claims like this.
P.S.: I will add that “denying its existence” is—as seems to be par for the course in this discussion—an inaccurate gloss.
And quite surprising, too, at least to me. I really would not have expected it. Perhaps this simply speaks ill of my ability to judge character.
Look, the relevant comment was literally a reply to you. You knew what Duncan thought on this topic.
Maybe you forgot (we don’t have perfect memory), but I don’t buy it: what is going on is that you saw an opportunity to object to a thing that you approximately knew was correct, because maybe I would fail to find an easy-to-quote excerpt from Duncan, or maybe you literally hoped to just waste a bit more of my time, or to successfully cause me to end up frustrated and embarrass myself in some way.
Like, yes, asking for receipts seems fine, but that’s different from insisting on receipts in a perfect format. The appropriate thing to do when you make a claim like this is to put in some symmetric effort yourself, finding appropriate quotes or providing your own reasonable summaries of the external evidence, instead of playing games where you claim that “there are no instances where X turns out to be straightforwardly true” when, like, you yourself were the direct recipient of a comment that said that exact thing, and I had already linked to the post where that comment was made, and where the overall point was obvious even without the specific quote I dug up.
I don’t know what “a perfect format” means here, but if by this you mean “something which is clearly the thing being claimed, and not plausibly some other thing, or a thing that maybe doesn’t exist but maybe does, etc.”, then yes, a “perfect format” is indeed the only acceptable format.
That is absolutely not the appropriate thing to do when one’s interlocutor is the administrator of a forum who is in the process of banning one from that forum. Some cases are more ambiguous, but this one’s not.
And, I repeat, all of this is especially true given your track record on this subject.
Why oh why would it somehow no longer be part of appropriate conduct to be a reasonable interlocutor trying to help readers come to true beliefs if you are in the process of getting banned? I mean, I agree that ultimately you do not have that much more to lose, so IDK, you can make this choice, I can’t double-ban you, but it still seems like a dick move.
No, the thing I said is that people cite you as the reason for not wanting to post on LW. I didn’t make the claim that any such statement was easily extracted from context, or was somehow perfectly unambiguous, or any such thing. Even if Duncan had never made the specific comment I quoted, it would still be obvious to any informed reader that my summary (of Duncan’s take) was accurate. It would just require reading a bunch more comments to make an inference.
[this comment is >90% theoretical, i.e. not specifically about this thread / topic] [“topic nonspecific”? “topic abstracted”? not off-topic, but not indexed to the specific situation; not meta, but not very object-level]
I’m not familiar with the whole Said context, but just from perusing this thread, it sounds like he is at least presenting himself as behaving in order to create / maintain / integrate into some set of discourse norms. Presumably, he views those norms as more likely to be good (truth-tracking, successful, justice-making, what have you) than feasible alternatives. In that context, the issue of cognitive labor is a central one.
I just want to flag that I think there are probably major theoretical open questions here. It seems that Said disagrees, in that he performs a presumption that his norms and his implementations are correct. (Or perhaps it is not a disagreement, but a merely-performative stance, perhaps as a method of asserting those norms.)
Example of open question: how do you deal with claims that summarize things, but that are somewhat hard to verify or to publicly demonstrate? E.g. Habryka says “lots of people cite Said as XYZ”. Some of that will be private communications that should not be shared. How to deal with this? In legal contexts that’s not admissible, but that’s not necessarily a great answer outside of highly adversarial contexts. Some of those citations will be not exactly private, but difficult to track down / summarize / prove concisely. How to deal with that?
It sounds like a really obvious basic question, where there shouldn’t be any easy progress to be made—but I’m not even sure about that!
(Further, it’s part of the disagreement here, and maybe in many of Said’s interactions: the question “Examples?”, if we drill down into the agentic matrix of discourse, is a values assertion (e.g. a bid for extension of credit; a bid for cognitive resources; or an assertion that cognitive resources are owed; or a claim of surprising shared value; etc.). In the cases where “Examples?” is an assertion that the author owes the public some cognitive resources (or, maybe or maybe not equivalently: the best distribution of computation would have the author work to give examples here and now), the question is raised about the right distribution of cognitive work. And the answer is quite non-obvious and most likely context specific! For example, an expert (e.g. a professor) might end up being dismissive, or even disdainful, toward a bright-eyed curious undergrad. In many cases this is at least a tragedy, if not a downright moral crime; but in some cases, despite appearances, it is actually correct. The undergrad must learn at some point to think on zer own, and prune zer own babble, and extract more useful bits from experts per time.)
For example: Sometimes if Alice makes a summarizing claim X, and Bob asks Alice for demonstrations, Alice should be able to say “Maybe I will provide that, but first I would like you to actually stake some position—claim “not X”, or say that you are confused about what X means, or claim that X is irrelevant; or if you are not willing to do that right now, then I want you to first go investigate on your own until you reach a preliminary conclusion”. This sort of pattern might currently be insufficiently “ennormed”—in other words, even if Alice is comfortable saying that and aware of it as an option, she might correctly expect others to have a blanket view that her response is, unconditionally, inappropriate. (E.g., Said might say that this response is blanket inappropriate for some roles that Alice is playing in a conversation.)
I never claimed that it would “no longer be part of appropriate conduct to be a reasonable interlocutor trying to help readers come to true beliefs”, so this is a strawman. The relevance of the situation, and its effect on epistemic conduct, is explained in my earlier comment.
And if the claim you want to make is “Duncan never said X, but it’s obvious that he believes X”, then you should make that claim—which is a different claim from “Duncan said X”.
But that’s of course not what I said. I did not say “Duncan said X”. I said (paraphrased) “Duncan cited X in the context of Y” and “[Duncan] made a statement to this effect on LW”.
I am dropping out of this thread. It seems as productive as many of the threads have been with you.
Someone else should feel free to pick it up and I might respond more. I do think there are potentially valuable points to be made around the degree to which this decision was made as a result of author complaints, what actual authors on LW believe about your contributions, etc. But this specific subthread seems pretty evidently a waste of time.
He did not say that they made such claims on LessWrong, where he would be able to publicly cite them. (I have seen/heard those claims in other contexts.)
If someone (supposedly) says something to you in private, and you report this (alleged) conversation in public, then as far as public knowledge is concerned, it is not correct to say that it has “turned out to be straightforwardly true” that that (alleged) conversation took place. Nothing has “turned out” in any way; there’s just a claim that’s been made—that is all.
This is also my sense of things.
To me, this reads like a claim that it would be meritorious to respond in such a way, because it embodies some virtue or achieves some consequence. (Elsewhere, I claimed that I had no personal problem with Said’s comments and someone privately replied to me “shouldn’t you, if you believe he’s burning the commons?”. I’m still considering it, but I suspect “keep your identity small” reasons will end up dominating.)
What’s the virtue or consequence that you’re focused on, here?
A longer quote, for context and easier readability:
The virtue is simply that one should object to tendentious and question-begging formulations, to sneaking in connotations, and to presuming, in an unjustified way, that your view is correct and that any disagreement comes merely from your interlocutor having failed to understand your obviously correct view. These things are bad, and objecting to them is good.
Just noting that
is a strong argument for objecting to the median and modal Said comment.
If you see me doing any such things, you should definitely object to them.
As I do not in fact make a habit of doing such things, I have no fear of my median and/or modal comments falling afoul of such objections.
EDIT: Well. I guess I should amend this reply somewhat. In the counterfactual scenario where I were not banned from LessWrong, I would say the above. In actuality, it would obviously be unfair for you to object to any of my comments (by means of replying to them, say), as I would not be able to respond (and, as far as I know, there is no UI indicator along the lines of “user A has been banned, and thus cannot reply to this reply by user B to his comment”).
However, I welcome objections, criticisms, etc., in any public venue where I can respond, such as on Data Secrets Lox.
I think this reply is rotated from the thing that I’m interested in—describing vice instead of virtue, and describing the rule that is being broken instead of the value from rule-following. As an analogy, consider Alice complaining about ‘lateness’ and Bob asking why Alice cares; Alice could describe the benefits of punctuality in enabling better coordination. If Alice instead just says “well it’s disrespectful to be late”, this is more like justifying the rule by the fact that it is a rule than it is explaining why the rule exists.
But my guess at what you would say, in the format I’m interested in, is something like “when we speak narrowly about true things, conversations can flow more smoothly because they have fewer interruptions.” Instead of tussling about whether the framing unfairly favors one side, we can focus on the object level. (I was tempted to write “irrelevant controversies”, but part of the issue here is that the controversies are about relevant features. If we accept the framing that habryka knows something that you don’t, that’s relevant to which side the audience should take in a disagreement about principles.)
That said, let us replace the symbol with the substance. Habryka could have written:
In my culture, I think the effect of those two paragraphs would be rather similar. The question of whether he or you is right about propriety for LessWrong is stored in the other words in the post, in the other discussion elsewhere, and in the legitimacy structures that have made habryka an admin of LW and how they react to this decision. I think very little of it is stored in the framing of whether this is an intractable disagreement or a failure of education.
I also don’t find the charge that it is “tendentious” all that compelling because of the phrase “by my lights”. Habryka has some reasons to think that his views on how to be a good commenter have more weight than just being his opinions, and shares some of those reasons in the rest of the post, but the sentence really is clear that your comments are disappointing according to his standards (which could clearly be controversial).
In your culture, are the two highly different? What is the framework I could use to immediately spot the difference between the paragraphs?
Not Said, but I agree with him that this paragraph wasn’t good, and want to explain why.
I actually think what Habryka originally wrote contained implicit claims that your rephrasing does not, and those claims are true and important.
It is unfortunate that those claims were made implicitly, as that makes them harder to notice and discuss. In my model of the world there are, indeed, things that Said is not able to see or understand and that other people can. And the important disagreement is about the realness of those things.
The virtue in claiming things explicitly rather than implicitly is that it makes it easier to understand the structure of the disagreement, to notice cruxes, to model the world and each other’s different maps, and to see what could change one’s mind.
In my model of the world, people who disagree with Said often understand 90%+ of his claims, while he understands 60% or less of theirs. This is not the full reason why I believe he is wrong here, but it is a strong heuristic, with a causal path to being right or wrong about things, and maybe a third of my reasons.
It is possible that I’m projecting implications here that are not actually there, or that derive from the history of Said’s interactions on LW. But if I have indeed erred that way, that too is a reason to avoid this ambiguity.
Disagree. Of course it’s by his lights. How else could it be? It’s his standards, which he believes are the correct ones. That phrase adds nothing. It’s contentless boilerplate.
(This is a frequent feature of the sort of writing which, as I have said many times, is bad. If you say “X is true”, you are claiming to believe that X is true. There is no need to add a disclaimer that you believe that X is true. We know that you believe this, because you’re claiming it.)
(Now, sometimes one might say such a thing as a rhetorical flourish, or to highlight a certain aspect of the discussion, or for other such reasons. But the idea that it’s necessary to add such a disclaimer, or that such a disclaimer saves you from some charge, or whatever, because the disclaimer communicates some important difference between just claiming that X is true and claiming that you believe that X is true, is foolishness.)
FWIW, this guess is so far removed from being right that I have trouble even imagining how you could have generated it. (Yet another in a very long series of examples of why “interpretive labor” is bad, and trying to guess what one’s interlocutor thinks when you already know that you don’t understand their view is pointless.)
He could have written that, yes. But it would have been a strange, unnatural, and misleading thing to write, given the circumstances. The formulation you offer connotes a scenario where two parties enter into discussions and/or negotiations as equals, without presupposing that their own view is necessarily correct or that no compromises will need to be made, etc. But of course nothing remotely like that was the case. (The power relation in this case has always been massively asymmetric, for one thing.)
And, as I said, it’s also a strange thing to write. An admin is banning a member of a forum, because they can’t agree on proper rules/norms/practices…? Why should they need to agree? Doesn’t the admin just make rules, and if someone breaks the rules enough, ban them…? What’s all this business about “trying to reach agreement”? Why is that a goal? And why declare defeat on it now? And what does it have to do with banning?
So, in a certain sense, “the effect of those two paragraphs would be rather similar”, in that they would both be disingenuous, though in different ways (one weirder than the other).
One I like to use is “how would the other guy describe this?”. Another good one is “how would a reasonable, intelligent, but skeptical third party, who has no particular reason to trust or believe me, and is in fact mildly (but only mildly) suspicious of me and/or my motives and/or my ideas, read this?”.
What do you think, then? Why are those things bad and why is objecting to them good?
If you can’t answer those questions, then I’m not sure what arguments about propriety we could have. If we are to design functional site norms, we should be guided by goals, not merely following traditions.
(The point of interpretive labor, according to me, is to help defeat the Illusion of Transparency. If I read your perfectly clear sentence and returned back a gross misunderstanding—well, then a communication breakdown happened somewhere. By looking at what landed for me, we have a stacktrace of sorts for working backwards and figuring out what should have been said to transmit understanding.)
To be clear, we’re talking about:
And you want me to explain why these things are bad?
Well, the “sneaking in connotations” bit is a link to a Sequence post (titled, oddly enough, “Sneaking in Connotations”). I don’t think that I can explain the problem there any better than Eliezer did.
The other stuff really seems like it’s either self-explanatory or can be answered with a dictionary lookup (e.g., “begging the question”).
It’s not like we disagree that these things are bad, right? You’re doing, like, a Socratic thing; like, “why is murder bad?”—yeah, we all agree that murdering people is bad, but we should be able to explain why it’s bad, in order to write good laws. Yes?
If so, then—sure, I don’t in principle object to such exercises—on the contrary, I often find them to be useful—but why do this here, now, about these specific things? Why ask me, in particular? If we want to interrogate our beliefs about discussion norms in this sort of way, surely doing it systematically, and in a context other than a post like this, would make more sense…
On the other hand, if what you’re saying is that you disagree that the aforementioned things are bad, then… I guess I’m not sure how to respond to that, or what the point would even be…
Yes. Part of this is because my long experience is that sometimes our sense of communication or our preferences for norms have flipped signs. If you think something is bad, that’s moderate but not strong evidence that I think it’s bad, and we might be able to jump straight to our disagreement by trying to ground out in principles. I think in several previous threads I wish I had focused less on the leaves and more on the roots, and here was trying to focus on roots.
I mean, I am genuinely uncertain about several parts of this! I think that the audience might also be uncertain, and stating things clearly might help settle them (one way or the other). I think there is value in clear statements of differences of opinion (like that you have a low opinion of interpretative labor and I have a high opinion of it), and sometimes we can ground those opinions (like by following many conversations and tracking outcomes).
Like, I understand ‘tendentious’ to be a pejorative word, but I think the underlying facts of the word are actually appropriate for this situation. That doesn’t mean it’s generically good, just that criticizing it here seems inappropriate to me. Should we not invite controversy on ban announcements? Should we not explain the point of view that leads us to make the moderation decisions we make?
But perhaps you mean something narrower. If the charge is more “this is a problem only a few users have, but unfortunately one of them is an admin, and thus it is the site rule”—well, we can figure out whether or not that’s the case, but I don’t actually think that’s a problem with the first paragraph, and I think it can be pointed at more cleanly.
As it happens, I reread that post thru your link. I thought that it didn’t quite apply to this situation; I didn’t see how habryka was implying things about you thru an argument via definition, rather than directly stating his view (and then attempting to back it up later in the post). I thought Frame Control would’ve been a better link for your complaint here (and reread our discussion of it to see whether or not I thought anything had changed since then).
I also didn’t quite buy that “begging the question” applied to the first paragraph. (For the audience, this is an argument that smuggles in its conclusion as a premise.) I understood that paragraph to be the conclusion of habryka’s argument, not the premise.
Overall, my impression was—desperation, or scrambling for anything that might stick? Like, I think it fits as a criticism of any post that states its conclusion and then steps thru the argument for that conclusion, instead of essaying out from a solid premise and discovering where it takes you. I think both styles have their virtues, and think the conclusion-first style is fine for posts about bans (I’ve used it for that before), and so I don’t find that criticism persuasive. (Like, it’s bad to write your bottom line and then construct the argument, but it’s not bad to construct an argument and then edit your introduction to include your conclusion!)
But maybe I missed the thing you’re trying to convey, since we often infer different things from the same text and attend to different parts of a situation. I tried to jump us to the inferences and the salient features, and quite possibly that’s not the best path to mutual understanding.
Some people realize that their position is a personal one; others assume that their position is standard or typical. Such phrases are often useful as evidence that the person realizes that fact; of course, since they can be easily copied, they are only weak evidence. “Strawberry is a better flavor, according to me” is a different sentence from “Strawberry is a better flavor”, and those two are yet again different from “Four is larger than two.” Adding ‘according to me’ to the last option would be a joke.
I think a frequent source of conflict has been differing judgments on what is usual and what is unusual, or what is normal and what is abnormal.
I understood us not to be discussing power relations (was anyone ever confused about who was the admin of LessWrong?) but something more like legitimacy relations (what should be the rules of LessWrong?). You’ve been here longer; you might know the Sequences better; you might have more insight into the true spirit of rationality than habryka. In order to adjudicate that, we consult arguments and reasons and experience, not the database.
Using the lens of power relations, your previous complaint (“This phrasing assumes”) seems nonsensical to me; of course the mod would talk about educating the problem user, of whether they understand and learn the models and behaviors as handed down from on high.
Here I would like to take a step outward and complain about what I perceive as a misstep in the conversational dance. Having criticized habryka’s paragraph, you described its flaws and went so far as to propose a replacement:
My replacement differs from yours. But I claim this criticism of my replacement (that it connotes a discussion of equals) applies just as readily to yours, if not more readily because my version includes the ban. (A more fair comparison probably ends at ‘on that goal’ and drops the last phrase.) If not, it is for minor variations of style and I suspect any operationalization we come up with for measuring the difference (polling Turkers or LLMs or whatever) will identify differences between their connotations as minor (say, a split more even than 66-34 on which connotes more even power relations).
Here my thoughts turn to the story in The Crackpot Offer, and the lesson of looking for counterarguments to your own counterarguments.
Here is a demonstration that adding those sorts of disclaimers and caveats does absolutely nothing to prevent the LW moderators from judging my comments to be unacceptable, as though no such disclaimers were present.
Note, in particular, that @Elizabeth’s “Note from the Sunshine Regiment” says:
This despite the fact that the comment in question was in fact filled with precisely such disclaimers—which the mods simply ignored, writing the moderator judgment as though no such disclaimers were there at all!
I’ve said before that I don’t take such suggestions (to add the disclaimers) at all seriously; and here we have an unambiguous demonstration that I am right to take that stance.
You wrote:
But of course his standards can’t be controversial, because he’s the admin. If someone disagrees with his standards—irrelevant; he doesn’t have to care. There is no practical difference between his standards and “the correct” standards, because he does not have any need to distinguish between those things. Therefore the “by my lights” clause is noise.
I understood us to be discussing a thing that Habryka wrote in the post. If the thing he wrote involves power relations, or connotations about power relations, then how can we not be discussing power relations…?
Why “of course”? I completely disagree with this.
I have had this disagreement with the LW mods before. It’s what motivated me to write “Selective, Corrective, Structural”. And my view on this remains the same as it was in 2018: that attempting to behave as a “corrective authority”, in the context of a forum like this, is weird and bad.
A moderator talking about “educating the problem user” is extremely suspect.
I… disagree, mostly. But also…
At this point… I am also confused about what it is we’re even talking about. What’s the purpose of this line of inquiry? With each of your comments in this thread, I have ended up with less and less of an idea of what you’re trying to ask, or say, or argue, or… anything.
Perhaps you could summarize/rephrase/something?
There are several. The overarching goal is that I want LessWrong’s contribution to global cognition to be beneficial. As a subgoal to that, I want LessWrong’s mod team to behave with integrity and skill. As subgoals to that, I’m trying to figure out whether there were different ways of presenting these ideas that would have either worked better in this post, or worked better in our discussions over the years at grounding out our disagreement; I’m also interested in figuring out if you’re right and we’re wrong!
Related to the last subgoal, I think your typology of selective/corrective/structural is useful to think about. I view us as applying all three—we screen new users (a much more demanding task now that LLMs are directing people to post on LessWrong), we give warnings and feedback and invest some in rationality training projects, and we think about the karma system and UI changes and various programs and projects that can cause more of what we want to see in the world. I don’t think behaving as a corrective authority is weird and bad; I think the polite and detailed version of “read the sequences” is good.
But more narrowly—looking at this conversational chain—you made a criticism of habryka’s post, and I tried to take it seriously. Does it matter that the post expresses or promotes a particular point of view? Does it matter that it’s controversial? What would it look like to fix the problems in the first paragraph? I left comments on an earlier draft of this post, and I tried to apply a framework like “how would the other guy describe this?”, and I missed those problems in the first paragraph. Tsuyoku Naritai.
[I think that you deserve me giving this a real try, and that the other mods deserve me attempting to get to ground on something with you where we start off with a real disagreement, or where I don’t understand your position.]
Reductionism—the idea that things are made out of parts. We can focus on different parts of it at different times. To me this also relates to the idea of True Rejections. If what you are objecting to is that habryka is banning you and that he’s the mod and you aren’t, then—I feel sympathy for you, but there’s really not much to discuss. I think there is a lot to discuss about whether or not it’s right for LW to ban you, because I am pretty invested in pushing LW to do the right thing. And that one is not a power relations question, and seems like one that we can discuss without power relations.
Yes, even if we construct airtight arguments, habryka might still ignore them and go through with the ban anyway. Yes, some people will reflexively support the mods because they like the website existing and want to subsidize working on it. But some people are watching and thinking and deciding how to relate to LW moving forward based on how these arguments shake out. That is...
I think there are meaningful stakeholders whose disapproval would sink habryka’s ability to run LessWrong, and I think attempting to run LessWrong in an unethical or sloppy way would lead to the potential benefits of the site turning to ash.
(I also think this is a nonstandard usage of ‘controversial’. It just means ‘giving rise to public disagreement’, which moderation decisions and proposed norms and standards often do. Like, you’re controverting it right now!)
Returning to true rejections—suppose a fundamental issue here is that you have one vision for LW, where there’s no corrective authority, and we have a different vision for LW, where there is corrective authority. Then I think either we find out why we want those things and identify cruxes and try to learn more about the science of communication and moderation so that we can better achieve our shared goals, or we decide that our goals are sufficiently in conflict that we should pursue them separately. And, like, the value I see in habryka’s offer to edit in your text to the post is that you can make your pitch for your vision, and maybe people who prefer that vision will follow you to Data Secrets Lox, and the more clarity we can reach the more informative that pitch can be.
Ok, fair enough.
I also think this… I think? I guess it depends on what exactly you mean by “the polite and detailed version”.
But, uh, I must protest that I definitely have read the sequences. I have read them several times. If these attempts, by the mods, at “correction”, are intended to be any version (polite or otherwise) of “read the sequences”, then clearly someone here is very confused, and I don’t think that it’s me. (Indeed, it usually seems to me as though the people I am arguing with, e.g. Habryka, are the ones who need to be told to read the Sequences!)
Well, for one thing, I don’t actually think that the concept of “true rejections” is as useful as it’s been made out to be. I think that in practice, in many or maybe even most cases when someone opposes or rejects or dislikes something, there just is not any such thing as some single “true rejection”.
That aside—well, sure, obviously I object to being banned, that goes without saying; but no, that wasn’t at all the point that I was making in that comment.
As for whether it’s right for LW to ban me—again I think it’s pretty obvious what my position on that question is. But that, too, was not my point.
Eh?? What do you mean, “might”?! As far as I am aware, there is no “might” here, but only a decision already made!
Is this not the case? If so, then I think this should really be made clear. Otherwise, I must say that I do not at all appreciate you talking as if the decision isn’t final, when in fact it is.
Sure, in a very circumscribed way (I’m not even allowed to upvote or downvote comments outside of this top-level thread—Habryka made sure to send me a message about that!), and only until the ban proper takes effect.
Well, I’d certainly like to believe so. I find these vague references to “stakeholders” to be suspect at the best of times, though.
Everything else aside, let me address the Data Secrets Lox point first. While I would of course be delighted if people who have found my writing here on LW useful joined DSL, and of course everyone here who wants to join is welcome to do so, I must note that DSL is not really “LessWrong, done the way that Said thinks it should be done”; it wasn’t intended to be such a thing. I would call DSL a “rationalist-adjacent”, general-interest discussion forum. It’s not really aiming at anything like the same goals as LW is.
Anyhow, yes, sure, this is all fine, finding out why we want things, all of that is good. It seems rather “too little, too late”, though. I’ve been making my “pitch” for years; I’ve been explaining why I want things, what I think is the right way to run a forum like this and why I think those things, etc. The amount of uptake of those ideas, from the LW mods’ side, has been approximately zero. (Even when I have offered to provide free design and development work in the service of making those ideas happen—an offer which, as I expect you know, is not an idle one, when coming from me!—still, nothing.) Well, alright, obviously you have no obligation to find my views compelling and my arguments convincing, but my point is that this thing you propose has already been tried. At some length.
So… I am somewhat less than enthusiastic.
But! Despite all that, let’s give it a shot anyway. To the object level:
As I wrote earlier, an honest version of that paragraph would say:
“I have had disagreements with Said; we have discussed, debated, argued; I remain convinced of my view’s correctness.”
Obviously that’s an incomplete replacement, so let’s try to write the full one. It might look like this (we’ll leave the first sentence as it is):
“For roughly equally long have I spent around one hundred hours almost every year discussing, debating, and arguing with Said about norms, rules, and practices of forum moderation. These discussions and arguments have often taken place in the context of moderation actions taken, or considered, against Said (whose comments, and interactions with other site members, I have often found to be problematic; although Said, of course, disagrees, for what he believes to be principled reasons). Despite those discussions and arguments, our disagreements remain; I remain convinced of my view’s correctness. Today I am declaring defeat on the goal of convincing Said that I am right and he is wrong (and to alter his behavior accordingly). I am thus giving him a 3 year ban.”
I wouldn’t call this perfect, exactly, but it would be a great improvement.
Note that the above passage is basically honest (though a bit oblique) in making explicit the relevant power relations. It is also honest about the relative “consensus value” of the opposing views (namely, that they’re equal in both being “I think this and he thinks that”, no more and no less, with no very strong reason to assume that one side is right). The formulation also prompts, from the reader, the obvious question (“well, maybe you aren’t right, eh? maybe the other guy’s right and you’re wrong?”), which is exactly as it should be.
Note, by the way, that—unlike with the text of the actual first paragraph as it stands in the post—an alert reader will come away from the passage above with a vague sense that the decision that’s been reached is a rather odd one, reached for rather odd reasons. This, too, is exactly as it should be. The text of the post does attempt to address the sorts of questions that such a vague sense might rightly be operationalized as (such as “eh, if this guy broke your rules, why didn’t you just ban him a long time ago? … he did break your rules, right? otherwise why would you ban him”), but it’s important that the reader should notice the problem—otherwise, they will not be able to effectively evaluate the attempt to resolve it.
Then perhaps I misunderstand what you mean by “corrective authority”? It seems to me like “read the Sequences” is an example of “apply such measures as will make the people in your system alter their behavior, to conform to relevant optimality criteria”. But then I find it difficult to square with:
Perhaps the difference is between “read the sequences” and “if you keep posting low-quality comments, we will ban you, and this part of the sequences explains the particular mistake you made here”? Or perhaps the difference is between the centralized moderator decision-making (“this comment is bad because Alice says so and her comments have a fancy border”) and decentralized opinion-aggregation and norm enforcement (“this comment is bad because its net karma is negative”)?
There is a different way to make things coherent, of course, which is that as part of the transition to LW 2.0 the mod team attempted to shift the culture, which involved shifting the optimality criteria, and the objection to us being corrective authorities in this way is not an objection to corrective authority as a method but instead an objection to our target. Which, that’s fair and not a surprise, but also it seems like the correct response to that sort of difference is for us to shake hands and have different websites with different target audiences (who are drawn to different targets). Otherwise we’ll just be locked in conflict forever (as happens when two control systems are trying to set the same variable to different reference values) and this doesn’t seem like a productive conflict to me. (I do think we’ve written about culture and Zack has written about culture downstream of this disagreement in a way that feels more productive than the moderation discussions about specific cases, but this feels way worse than, say, artists jockeying for status by creating new pieces of art.)
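(The control-systems metaphor above can be made concrete. Here is a minimal sketch, purely illustrative, with every name and number invented for the example: two proportional controllers try to set one shared variable to different reference values. The variable settles at a compromise neither controller wants, and both keep exerting opposing effort indefinitely, which is the “locked in conflict forever” dynamic.)

```typescript
// Illustrative only: two proportional controllers acting on one shared
// variable, each with a different reference value (all numbers invented).

const gainA = 1.0; // how hard controller A pushes
const gainB = 1.0; // how hard controller B pushes
const targetA = 0; // A wants x = 0
const targetB = 10; // B wants x = 10

let x = 2; // the shared variable both controllers act on
const dt = 0.01; // simulation time step

for (let step = 0; step < 10_000; step++) {
  const effortA = gainA * (targetA - x); // A pushes x down toward 0
  const effortB = gainB * (targetB - x); // B pushes x up toward 10
  x += (effortA + effortB) * dt;
}

// x converges to the gain-weighted compromise (here 5), satisfying neither
// controller; both keep exerting equal and opposite effort forever.
console.log(x.toFixed(2)); // ~5.00
console.log((gainA * (targetA - x)).toFixed(2)); // A's effort: ~-5.00
console.log((gainB * (targetB - x)).toFixed(2)); // B's effort: ~5.00
```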
I think this is correct, in that many decisions are made by aggregating many factors, and it’s only rarely going to be the case that a single factor (rather than a combination of factors) will be decisive.
(I do note this is a situation where both of us ‘disagree with the Sequences’ by having a better, more nuanced view, while presumably retaining the insight that sometimes decisive factors are unspeakable, and so discussions that purport to be about relevant information exchange sometimes aren’t.)
Fair. I think it is challenging to express the position of “New information could persuade me, but I don’t expect to come across new information of sufficient strength to persuade me.”
(On the related stakeholders point: I agree that it is often vague, but in this specific case I’m on the board that can decide to fire habryka, and one of the people who is consulted about decisions like this before they’re made. I suspect that in the counterfactual where I left the mod team at the start of 2.0, you would have been banned several years earlier. This is, like, a weird paragraph to write without the context of the previous paragraph; I was in fact convinced this time around, and it is correspondingly challenging to convince me back the other direction, and it seems cruel to create false hope, and difficult to quantitatively express how much real hope there is.)
Indeed; I have appreciated a lot of the work that you’ve done over the years and am grateful for it.
Something about the “consensus value” phrasing feels off to me, but I can’t immediately propose a superior replacement. That is, it would be one thing if just Oli disagreed with you about moderation and another different thing if “the whole mod team disagrees with Said about moderation”. The mods don’t all agree with each other—and it took us years to reach sufficient agreement on this—but I do think this is less like “two people disagree” and more like “two cultures are clashing”.
That said, I do think I see the thing that I could have noticed if I were more alert, which is that I already had the view that we were optimizing for different targets, and making that the headline has more shared-reality nature to it. Like, I think the following framing is different from yours but hopefully still seems valid to you:
Sure; my point was just that it’s more like either “two people disagree” or “two cultures are clashing” than it is like “physicists are explaining Newtonian mechanics to the Time Cube guy”.
Yes, that would also be basically fine.
I started writing a reply to your other comment, when I noticed that my last comment in reply to you had been strong-downvoted.
(By a mod, obviously. Who else has a strength-10 vote and is following this discussion so closely?)
Indeed, I notice that the mods (yes, obviously it’s the mods) have been strong-downvoting pretty much all of my comments in this discussion with you.
So, before I continue engaging, I really do have to ask: this project of yours, where you are engaging in this apparently good-faith discussion with me, trying to hash out disagreement, etc.—what do the other mods think of it?
Is this just you on your own quixotic sidequest, with no buy-in from anyone else who matters?
If that’s the case, then that seems to make the whole thing rather farcical and pointless.
(Really, strong-downvoting a reply, to a moderator, written on that moderator’s request! If we want to talk about problems with voting behaviors, I’d suggest that the mods start by looking in the mirror.)
I asked in the sunshines channel on the LW Slack, and people there said that they were voting on comments based on their quality as comments; and while one of them is downvoting many of your comments on the page overall, they were not downvoting the majority of the comments in this thread.
There are more 10-strength users than just the mods; it may be the case that enough of them are downvoting comments that are at positive karma but leaving the −8 comments alone, which results in no one person downvoting more than a few comments in the thread, but the comments being underwater as a whole. But if there is a single mod who is trying to make this thread not happen, they’re not telling me (which seems worth doing because it would affect my behavior more than the downvoting would). [Edit: the person who did the database query clarified, and I now think that the votes are primarily coming from mods.]
I made the classic mistake of ‘asking two questions together’ and so primarily got responses on voting behavior and not what they think of the project, but I would (from their other writing) guess they are mostly out of hope about it.
I’m not sure if it was a mod, but the existence of high-strength votes and people willing to use them liberally seems like a problem to me. I also have a 10-strength vote but almost never use it because I don’t trust my own judgment enough to want to strongly influence the discourse in an unaccountable way. But others apparently do trust themselves this way, and I think it’s bad that LW gives such people disproportionate influence.
FWIW, my guess is the site would be in a better place if you voted more, and used your high vote-strength more. My guess is you would overall add a bunch of positive signal, much more than an average commenter, which is why it IMO makes sense for your votes to have a lot more weight.
I do think voting around the zero point tends to be more whack and have a bunch of more complicated consequences, and often a swing of 10 points feels disproportionate to what is going on when a comment is between 1 and 10 karma. I’ve considered making various changes to the vote system to reduce the effects of this, but haven’t found something worth the tradeoff in complexity.
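To make the zero-point problem concrete, here is a minimal sketch of one possible dampening rule of the kind gestured at above. It is purely hypothetical: the function, the cap formula, and the parameters are all invented for illustration, and this is not LessWrong’s actual voting code.

```typescript
// Hypothetical illustration only: not LessWrong's actual voting code.
// One possible dampening rule: a single vote may move a comment's karma by
// at most (baseCap + |current karma|), so a strength-10 vote lands with
// full force only once a comment is already well away from zero.

function effectiveVoteStrength(
  rawStrength: number, // signed vote strength, e.g. -10 for a strong downvote
  currentKarma: number, // the comment's karma before this vote lands
  baseCap = 3, // maximum swing allowed on a comment sitting at exactly 0
): number {
  const cap = baseCap + Math.abs(currentKarma);
  const magnitude = Math.min(Math.abs(rawStrength), cap);
  return Math.sign(rawStrength) * magnitude;
}

// A strength-10 downvote on a comment at +2 karma moves it by only -5...
console.log(effectiveVoteStrength(-10, 2)); // -5
// ...while the same vote on a comment at +40 karma applies in full.
console.log(effectiveVoteStrength(-10, 40)); // -10
```

Under a rule like this, a strong vote still lands with full force on a comment that is already far from zero, but cannot single-handedly swing a fresh comment from +2 to −8; whether that is worth the added complexity in vote semantics is exactly the tradeoff mentioned above.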
A commenter writes:
This strikes me as either deeply confused, or else deliberately… let’s say “manipulative”[1].
Suppose that I am a moderator. I want to ban someone (never mind why I want this). I also want to seem to be fair. So I simply claim that this person requires me to spend a great deal of effort on them. The rest of the members will mostly take this at face value, and will be sympathetic to my decision to ban this tiresome person. This obviously creates an incentive for me to claim, of anyone whom I wish to ban, that they require me to spend much effort on them.
Alright, but still, can’t such a claim be true? To some degree, yes; for example, suppose that someone constantly lodges complaints, makes accusations against others, etc., requiring an investigation each time. (On the other hand, if the complaints are valid and the accusations true, then it seems odd to say that it’s the complainant/accuser who’s responsible for the workload involved in dealing with the issues.) Of course, that doesn’t apply here; I don’t complain much, on LessWrong.
Well, but surely the LW mods spent all those hours on something, right? Writing comments. Talking to various people. Well, yes. But… LessWrong isn’t a government agency, or a court of law, or a corporation with contractual obligations to members, etc. The mods weren’t obligated to do any of those things. It would have been very easy for them to avoid spending all that effort. The following scenario illustrates how they might’ve done so:
Carol (a LessWronger): I wrote a post on LessWrong, and this one dude wrote a comment on it, where he criticized me unfairly!
Dave (a moderator of LessWrong): People write all sorts of comments
Carol: I found it very unpleasant!
Dave: Downvote it and move on with your life
Carol: But other people upvoted it!
Dave: They’re allowed to do that
Carol: Aren’t you going to do something about this?
Dave: No, why would we
Carol: Because that guy’s comment was wrong!
Dave: Feel free to reply saying that, I guess
Carol: Ugh! That would be even more unpleasant! I shouldn’t have to do that!
Dave: shrug
Carol: Well! I don’t think I’ll be using this website!
Dave: Sure, that’s your right
Pretty easy. Definitely doesn’t require hours, much less tens of hours, much less hundreds of hours.
Of course, Dave could choose to have a longer discussion with Carol, if he wants. He could join the conversation himself, to facilitate communication between Carol and the author of the offending comment. He could do all sorts of things. But he could also… not do any of those things.
And in almost all cases where the LW moderators did anything whatsoever that had anything to do with me, it was the wrong thing to do, and the far superior choice (not necessarily the best choice, but far better than what they in fact did) would have been, precisely, to do absolutely nothing. In pretty much all of the examples given in the OP, doing nothing at all would’ve been a huge improvement. Writing no long comments. Having no long conversations with anyone. Just… nothing.
So, indeed, it is right to question the wisdom of the moderators in the choices they’ve made! But to speak of their “restraint” is absurd. These problems, all of these terrible mountains of effort which they’ve supposedly had to expend—it’s all been self-inflicted.
And to use such self-inflicted problems to justify banning someone—well. It’s approximately as honest as a schoolyard bully saying “I bruised my hand when I was beating you up for your lunch money, so now you owe me, and I’m gonna take your jacket as payment!”.
Not quite right, but the closest I can get without a long digression.
Yep, I agree with this as a common and IMO very perverse dynamic. I don’t think someone being “difficult to moderate” is almost ever an appropriate justification for banning someone. At the very least, they must also have some property that requires interfacing with them as a subject of moderation, one that isn’t located solely in the choices of the moderators. Otherwise this becomes a catch-22 with no grounding in reality.
In response to a comment by @clone of saturn, @habryka writes:
This is a thoroughly disingenuous response—so misleading as to be indistinguishable from a lie.
Consider the comment of mine to which Habryka refers. Here is its text in its entirety:
Why did I write this comment? Because the post began by asking:
In other words, the OP asked a question. And I answered it.
Note that:
I explicitly marked my answer to the OP as “my take”, said that Double Crux “seems like” a certain thing, that there “does not seem to be” a reason to pay attention to it, and that it not getting uptake is the default outcome “that I would expect”. Even my parenthetical about CFAR was explicitly and repeatedly noted to be my personal opinion.
These are all the disclaimers that people keep saying I should add! And yet, somehow, they turned out to make no difference at all, and the comment still incurred a visit from the “Sunshine Regiment”.
My comment does not criticize the post at all. There is nothing in the comment that is at all critical of the post itself, or of its author (@Raemon) for writing it. On the contrary, I take the post at face value, and provide a good faith answer directly to the central question which the post asks.
In other words, this comment is the most cooperative possible engagement with the post, precisely as the post itself requests (“discussion-prompt”).
And yet, despite all that, it was heavily downvoted, and incurred a moderator warning.
I can conclude only that when the moderators talk about what behavior they would like to see, what is rewarded, and what is punished, they are simply lying.
Note, in particular, that @Elizabeth’s “Note from the Sunshine Regiment” (i.e., the moderator judgment on the linked comment) says:
This despite the fact that the comment in question was in fact filled with precisely such disclaimers—which the mods simply ignored, writing the moderator judgment as though no such disclaimers were there at all!
The most charitable interpretation I can think of is that Elizabeth meant you should have added “I think that...” or ”...for me” specifically to the line “Also, it comes from CFAR, which is an anti-endorsement.”
But regardless, it seems crazy that your comment was downvoted to −17 (−16 now; someone just upvoted it by 1) and got a negative mod judgment for this.
Crazy indeed. And—as in several other of these example cases—I will note that the author of the post himself evidently had no problem with my comment, and had no difficulty writing a perfectly reasonable reply, which resulted in an entirely civil and productive discussion.
Which makes this yet another example of the pattern where, if the mods had simply left it alone, it would’ve been fine. (Even better would’ve been for them to write something like “yes of course it’s ok to write polite, clearly cooperative, mildly critical comments like this, don’t be silly”, but we can’t expect miracles…)
Commenter @Gordon Seidoh Worley writes:
While of course you should not trust my self-report on this, I will nonetheless note for the record that I have made no special attempt to alter my commenting style or approach, recently or (to my recollection) ever.[1]
I am glad to hear that you’ve found my comments to be useful (assuming that this is what you meant by “positive”).
Excepting, of course, specific adjustments to conform to specific new rules, etc.
I have a question for you, Said.
If I understand correctly, a big part of the problem is that people perceive your comments as having a certain hostile-flavored subtext. You deny that this subtext is actually present and fault them for inferring things that you hadn’t stated explicitly.
I strongly suspect that you are capable of writing in such a way where people don’t perceive this hostile-flavored subtext. A softer, gentler type of writing.
Assuming that you are in fact capable of this type of writing, my question is why you choose to not write in this manner.
I’ve had this specific conversation, what feels like a dozen times now. This exact question has been asked, answered, discussed, argued, absolutely to death.
I can dig up links if you really want me to, but… should I? Are you asking because you’re unaware of the prior art? Or are you aware of the previous discussions, but find them unsatisfactory somehow? Or something else?
Why do people downvote such a comment, exactly?
That makes sense. I am not familiar with such previous conversations, haven’t really been following any of this, and didn’t read the OP too thoroughly. I am not motivated to dig up previous conversations. If you or someone else would like to, I’d appreciate it, but no worries if not.
Alright. Well, here’s one starting point, I guess. (You can also Cmd-F in the comments on that post for “insult” and “social attack”; I think that should get you to most of the relevant subthreads.)
(There are many other examples, but this will do for now.)
I spent a few minutes trying to do so and feel overwhelmed. I’m not motivated to continue.
Edit:
If you wouldn’t mind, I’d appreciate a concise summary. No worries if you’d prefer not to though.
In particular, I’m wondering why you might think that your approach to commenting leads to more winning than the more gentle approach I referred to.
Is it something you enjoy? That brings you happiness? More than other hobbies or sources of entertainment? I suspect not.
Are your motivations altruistic? Maybe it’s that despite being not fun to you personally, you feel you are doing the community a service by defending certain norms. This seems somewhat plausible to me but also not too likely.
My best guess is that the approach to commenting you have taken is not actually a thoughtful strategy that you expect will lead to the most winning, but instead is the result of being unable to resist the impulse to respond when someone is wrong on the internet. (I say this knowing that you are the type of person who appreciates candidness.)
Replying to the added-by-edit parts of the parent comment.
My approach to commenting is the correct one.
(Or so I claim! Obviously, others disagree. But you asked about my motivations, and that’s the answer.)
Part of the answer to your question is that the “gentle approach” you refer to is not real. It’s a fantasy. In reality, there is my approach, and there are other approaches which don’t accomplish the same things. There is no such thing as “saying all the same things that Said says, but more nicely, and without any downsides”. Such an option simply does not exist.
Earlier, you wrote:
Well, setting aside the question of whether I can write in a “softer, gentler” way, it’s clear enough that many other people can write like that, and often do. One can see many examples of such writing on the EA Forum, for instance.
Of course, the EA forum is also almost entirely useless as a place to have any kind of serious, direct discussion of difficult questions. The cause of this is, largely, a very strong, and zealously moderator-enforced, norm for precisely that sort of “softer, gentler” writing.
Regardless of whether I can write like that, I certainly won’t. That would be wrong, and bad—for me, and for any intellectual community of which I am a member. To a first approximation, no one should ever write like that, on a forum like LessWrong.
Indeed I do appreciate candidness.
As far as “the most winning” goes, I can’t speak to that. But the “softer, gentler” path is the path of losing—of that, I am very sure.
As far as the xkcd comic goes… well. I must tell you that, while of course I cannot prove this, I suspect that that single comic is responsible for a large chunk of why the Internet, and by extension the world, is in the shape that it’s in, these days.[1] (Some commentary on my own views on the subject of arguing with people who are wrong on the internet can be found in this comment.)
I am not sure if it’s worse than the one about free speech as far as long-term harm goes, but xkcd #386 is at least a strong contender for the title of “most destructive webcomic strip ever posted”.
Thank you for the response.
Given your beliefs, I understand why you won’t apply this “softer, gentler” writing style. You would find it off-putting and you think it would do harm to the community.
There is something that I don’t understand and would like to understand though. Simplifying, we can say that some people enjoy your engagement style and others don’t. What I don’t understand is why you choose to engage with people who clearly don’t enjoy your engagement style.
I suspect that your thinking is that the responsibility falls on them to disengage if they so desire. But clearly some people struggle with that (and I would pose the same question to them as well: why continue engaging). So from your perspective, if you’re aiming to win, why continue to engage with such people?
Does it make you happy? Does it make them happy? Is it an altruistic attempt to enforce community norms?
Or is it just that duty calls and you are not in fact making a conscious attempt to win? I suspect this is what is happening.
(And I apologize if this is too “gentle”, but hey, zooming out, being agent-y, and thinking strategically about whether what you’re doing is the best way to win is not easy. I certainly fail at it the large majority of the time. I think pretty much everyone does.)
None of the above.
The answer is that thinking of commenting on a public discussion forum simply as “engaging with” some specific single person is just entirely the wrong model.
It’s not like I’m having a private conversation with someone, they say “Um I don’t think I want to talk to you anymore” and run away, and I chase after them, yelling “Come back here and respond to my critique! You’re not getting away from me that easily! I have several more points to make!!”, while my hapless victim frantically looks for an alley to hide in.
LessWrong is a public discussion forum. The point of commenting is for the benefit of everyone—yourself, the person you’re replying to, any other participants in the discussion, any readers of the discussion, any future readers of the discussion…
Frankly, the view that someone finding your comments aversive is a general reason not to reply to their comments or post under their posts strikes me as bizarre. Why would someone who only considered the impact of their comments on the specific user they were replying to even bother commenting on LessWrong? It seems like a monstrously inefficient use of one’s time and energy…
EDIT: See this comment thread for more on this subject.
Let me make this more concrete. Suppose you are going back and forth with a single user in a comments thread—call them Bob—and there have been nine exchanges. Bob wrote the ninth comment. You get the sense that Bob is finding the conversation unpleasant, but he continues to respond anyway.
You have the option of just not responding. Not writing that tenth comment. Not continuing to respond in that comment thread at all. (I don’t think you’d dispute this.)
And so my question is: why write the tenth comment? You point out that, as a public discussion forum, when you write that tenth comment in response to Bob, it is not just for Bob, but for anyone who might read or end up contributing to the conversation.
But that observation itself is, I think you’d agree, insufficient to explain why it’d make sense to write the tenth comment. To the extent your goals are altruistic, you’d have to posit that this tenth comment is having a net benefit to the general public. Is that your position? That despite potentially causing harm to Bob, it is worth writing the tenth comment because you expect there to be enough benefit to the general public?
Why not write the tenth comment…? I mean, presumably, in this scenario, I have some reason why I am posting any comments on this hypothetical thread at all, right? Some argument that I am making, some point that I am explaining, some confusion that I am attempting to correct (whether that means “a confusion on Bob’s part, which I am correcting by explaining whatever it is”, or “a confusion on my part, which I think that the discussion with Bob may help me resolve”), something I am trying to learn or understand, etc. Well, why should that reason not still apply to the tenth comment, just as it did to the first…?
I don’t accept this “causing harm to Bob” stipulation. It’s basically impossible for that to happen (excepting certain scenarios such as “I post Bob’s private contact info” or “I reveal an important secret of Bob’s” or something like that; presumably, this is not what we’re talking about).
That aside: yes, the purpose of participating in a public discussion on a public discussion forum is (or should be!) public benefit. That is how I think about commenting on LessWrong, at any rate.
I will again note that I find it perplexing to have to explain this. The alternative view (where one views a discussion in the comments on a LessWrong post as merely an interaction between two individuals, with no greater import or impact) seems nigh-incomprehensible to me.
Thank you for clarifying that your motivation in writing the tenth comment is to altruistically benefit the general public at large, and that you are making a conscious attempt to win in this scenario by writing it.
I suspect that this is belief in belief. Suppose that we were able to measure the impact of your tenth comment. If someone offered you an even-odds bet, for a large sum of money, that pays out only if the tenth comment has a net positive overall impact on the general public, I don’t think you would take it, because I don’t think you actually predict the tenth comment to have that net positive impact.
Because you have more information after the first nine comments. You have reason to believe that Bob finds the discussion to be unpleasant, that you are unlikely to update his beliefs, and that he is unlikely to update yours.
Hm. “Cause” might be oversimplifying. In the situation I’m describing let’s suppose that Bob is worse off in the world where you write the tenth comment than he is in the counterfactual world where you don’t. What word/phrase would you use to describe this?
My belief here is that impact beyond the two individuals varies. Sometimes lots of other people are following the conversation. Sometimes they get value out of it, sometimes it has a net negative impact on them. Sometimes few other people follow the conversation. Sometimes zero other people follow it.
I expect that you share this set of beliefs and that basically everyone else shares this set of beliefs.
This is not an accurate summary.
It seems like you’re trying very hard to twist my words so as to make my views fit into your framework. But they don’t.
None of that is particularly relevant to the considerations, described above, that affect my decision to write a comment.
I would describe it like you just did there, I guess, if I were inclined to describe it at all. But I generally wouldn’t be. (I say more about this in the thread I linked earlier.)
This seems to be some combination of “true but basically irrelevant” (of course more people read some comment threads than others, but so what?) and “basically not true” (a net negative impact? seems unlikely unless I lie or otherwise behave unethically, which I do not). None of this has any bearing on the fact that comments on a public forum aren’t just written for one person.
I usually find that I get negative value out of “said posts many comments drilling into an author to get a specific concern resolved”. usually, if I get value from a Said comment thread, it’s one where said leaves quickly, either dissatisfied or satisfied; when Said makes many comments, it feels more like polluting the commons by inducing compute for me to figure out whether the thread is worth reading (and I usually don’t think so). if I were going to make one change to how said comments, it’s to finish threads with “okay, well, I’m done then” almost all the time after only a few comments.
(if I get to make two, the second would be to delete the part of his principles that is totalizing, that asserts that his principles are correct and should be applied to everyone until proven otherwise, and replace it with a relaxation of that belief into an ensemble of his-choice-in-0.0001<x<0.9999-prior-probability context-specific “principle is applicable?” models, and thus can update away from the principles ever, rather than assuming anyone who isn’t following the principles is necessarily in error.)
What specific practical difference do you envision between the thing that you’re describing as what you want me to believe, and the thing that you think I currently believe? Like, what actual, concrete things do you imagine I would do differently, if your wish came true?
(EDIT: I ask this because I do not recognize, in your description, anything that seems like it accurately describes my beliefs. But maybe I’m misunderstanding you—hence the question.)
well, in this example, you are applying a pattern of “What specific practical difference do you envision”, and so I would consider you to be putting high probability on that being a good question. I would prefer you simply guess, describe your best guess, and if it’s wrong, I can then describe the correction. you having an internal autocomplete for me would lower the ratio of wasted communication between us for straightforward shannon reasons, and my intuitive model of human brains predicts you have it already. and so in the original claim, I was saying that you seem to have frameworks that prescribe behaviors like “what practical difference”, which are things like—at a guess—“if a suggestion isn’t specific enough to be sure I’ve interpreted correctly, ask for clarification”. I do that sometimes, but you do it more. and there are many more things like this, the more general pattern is my point.
anyway gonna follow my own instructions and cut this off here. if you aren’t able to extract useful bits from it, such as by guessing how I’d have answered if we kept going, then oh well.
I see… well, maybe it will not surprise you to learn that, based on long and much-repeated experience, I consider that approach to be vastly inferior. In my experience, it is impossible for me to guess what anyone means, and also it is impossible for anyone else to guess what I mean. (Perhaps it is possible for other people to guess what other people mean, but what I have observed leads me to strongly doubt that, too.) Trying to do this impossible thing reliably leads to much more wasted communication. Asking is far, far superior.
In short, it is not that I haven’t considered doing things in the way that you suggest. I have considered it, and tried it, and had it tried on me, many times. My conclusion has been that it’s impossible to succeed and a very bad idea to try.
Hm. I’m realizing that I’ve been presuming that you are at least roughly consequentialist and are trying to take actions that lead to good consequences for affected parties. Maybe that’s not true though.
But if it is true, here is how I am thinking about it. We can divide affected parties into 1) you, 2) Bob, and 3) others. We’ve stipulated that you expect the tenth comment to negatively affect Bob. So then, I’d think that’d mean that your reason for posting the tenth comment is that you expect the desirable consequences for you and others to outweigh the undesirable consequences for Bob.
Furthermore, you’ve emphasized “public benefit” and the fact that this is a public forum. You also haven’t indicated that you have particularly selfish motives that would make you want to do things that benefit you at the expense of others, at least not to an unusual degree. So then, I presume that the expected benefit to the third group—others—is the bulk of your reason for posting the tenth comment.
I’m sorry that it came across that way. I promise that I am not trying to twist your words. I just would like to understand where you are coming from.
“Roughly consequentialist” is a basically apt label. But as I have written a few times, act consequentialism is pretty obviously non-viable; the only reasonable way to be a consequentialist is rule consequentialism.
This makes the reasoning you outline in your second paragraph inapplicable and inappropriate.
I describe my views on this a bit in the thread I linked earlier. Some more relevant commentary can be found in this comment (Cmd-F “I say and write things” for the relevant ~3 paragraphs, although that entire comment thread is at least partly relevant to this discussion, as it talks about consequentialism and how to implement it, etc.).
Thanks for clarifying, Said. That is helpful.
I skimmed each of the threads you linked to.
One thing I want to note is that I hear you and agree with you about how these comments are taking place in public forums and that we need to consider their effects beyond the commenter and the person being replied to.
I’m interested in hearing more about why you expect your hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect. I will outline some things about my model of the world and would love to hear about how it meshes with your model.
Components of my model:
People generally don’t dig too deeply into long exchanges on comment threads. And so the audience is small. To the extent this is true, the effects on Bob should be weighed more heavily.
This hypothetical exchange is likely to be perceived as hostile and adversarial.
When perceived that way, people tend to enter a soldier-like mindset.
People are rather bad at updating their beliefs when they have such a mindset.
Being in a soldier mindset might cause them to, I’m not sure how to phrase this, practice bad epistemics, leaving them epistemically weaker moving forward, not stronger.
I guess this doesn’t mesh well with the hypothetical I’ve outlined, but I feel like a lot of times the argument you’re making is about a relatively tangential and non-central point. To the extent this is true, there is less benefit to discussing it.
The people who do read through the comment thread, the audience, often experience frustration and unhappiness. Furthermore, they often get sucked in, spending more time than they endorse.
(I’m at the gym on my phone and was a little loose with my language and thinking.)
One possibility I anticipate is that you think that modeling things this way and trying to predict such consequences of writing the tenth comment is a futile act consequentialist approach and one should not attempt this. Instead they should find rules roughly similar to “speak the truth” and follow them. If so, I would be interested in hearing about what rules you are following and why you have chosen to follow those rules.
… I get the sense that you haven’t been reading my comments at all.
I didn’t claim that I “expect [my] hypothetical tenth comment in this scenario we’ve been discussing to have a net positive effect”. I explicitly disclaimed the view (act consequentialism) which involves evaluation of this question at all. The last time you tried to summarize my view in this way, I specifically said that this is not the right summary. But now you’re just repeating that same thing again. What the heck?
… ok, I take it back, it seems like you are reading my comments and apparently (sort of, mostly) understanding them… but then where the heck did the above-quoted totally erroneous summary of my view come from?!
Anyhow, to answer your question… uh… I already answered your question. I explain some relevant “rules” in the thread that I linked to.
That having been said, I do want to comment on your outlined model a bit:
First of all, “the effects on Bob” of my comments are Bob’s own business, not mine.
Let’s be clear about what it is that we’re not discussing. We’re not talking about “effects on Bob” that are of the form “other people read my comment and then do things that are bad for Bob” (which would happen if e.g. I doxxed Bob, or posted defamatory claims, etc.). We’re not talking about “effects on Bob” that come from the comment just existing, regardless of whether Bob ever read it (e.g., erroneous and misleading descriptions of Bob’s ideas). And we’re definitely not talking about some sort of “basilisk hack” where my comment hijacks Bob’s brain in some weird way and causes him to have seizures (perhaps due to some unfortunate font rendering bug).
No, the sorts of “effects” being referred to, here, are specifically and exclusively the effects, directly on Bob, of Bob reading my comments (and understanding them, and thinking about them, etc.), in the normal way that humans read ordinary text.
Well, for one thing, if Bob doesn’t want to experience those effects, he can just not read the comment. That’s a choice that Bob can make! “Don’t like, don’t read” applies more to some things than others… but it definitely applies to some obscure sub-sub-sub-thread of some discussion deep in the weeds of the comment section of a post on Less Wrong dot com.
But also, and more generally, each person is responsible for what effects reading some text has on them. (We are, again, not talking about some sort of weird sci-fi infohazard, but just normal reading of ordinary text written by humans.) Part of being an adult is that you take this sort of very basic responsibility for how things affect your feelings, and if you don’t like doing something, you stop doing it. Or not! Maybe you do it anyway, for any number of reasons. That’s your call! But the effects on you are your business, not anyone else’s.
So in this hypothetical calculation which you allude to, “the effects on Bob” (in the sense that we are discussing) should be weighted at exactly zero.
If that perception is correct, then it is right and proper to perceive it thus. If it is incorrect, then the one who mis-perceives it thus should endeavor to correct their error.
Maintaining good epistemics in the face of pressure is an important rationality skill—one which it benefits everyone to develop. And the “pressure” involved in arguing with some random nobody on LessWrong is one of the mildest, most consequence-free forms of pressure imaginable—the perfect situation for practicing those skills.
If our hypothetical Bob thinks this, then he should have no problem at all disengaging from the discussion, and ignoring all further replies in the given thread. “I think that this is not important enough for me to continue spending my time on it, so thank you for the discussion thus far, but I won’t be replying further” is a very easy thing to say.
Then perhaps these hypothetical readers should develop and practice the skill of “not continuing to waste their time reading things which they can see are a waste of their time”. “Somehow finding yourself doing something which you don’t endorse” is a general problem, and thus admits of general solutions. It is pointless to try to take responsibility for the dysfunctional internet-forum-reading habits of anyone who might ever read one’s comments on LessWrong.
I don’t have the strongest grasp of what rule consequentialism actually means. I’m also very prone to thinking about things in terms of expected value. I apologize if either of these things has led to confusion or misattribution.
My understanding of rule consequentialism is that you choose rules that you think will lead to the best consequences and then try to follow those rules. But it is also my understanding that it is often a little difficult to figure out what rules apply to what situations, and so in practice some object level thinking about expected consequences bleeds in.
It sounds like that is not the case here though. It sounds like here you have rules you are following that clearly apply to this decision to post the tenth comment and you are not thinking about expected consequences. Is that correct? If not would you mind clarifying what is true?
I would appreciate it if you could outline 1) what the rules are and 2) why you have selected them.
Hm. I’d like to clarify something here. This seems important.
It’s one thing to say that 1) “tough love” is good because despite being painful in the short term, it is what most benefits the person in the long term. But it is another thing to say 2) that if someone is “soft” then their experiences don’t matter.
This isn’t a perfect analogy, but I think that it is gesturing at something that is important and in the ballpark of what we’re talking about. I’m having trouble putting my finger on it. Do you think there is something useful here, perhaps with some amendments? Would you like to comment on where you stand on (1) vs (2)?
I’ll also try to ask a more concrete question here. Are you saying (a) that by taking the effects on Bob into account we will end up with less good consequences for society as a whole (i.e. Bob + everyone else), and thus we shouldn’t take the effects on Bob into account? Or are you saying (b), that the effects on Bob simply don’t matter at all?
Sure, that’s basically true. Let’s say, provisionally, that this is a reasonable description.
I’m talking about stuff like this:
Now, is that the only rule that applies to situations like this (i.e., “writing comments on a discussion forum”)? No, of course not. Many other rules apply. It’s not really reasonable to expect me to enumerate the entirety of my moral and practical views in a comment.
As for why I’ve selected the rules… it’s because I think that they’re the right ones, of course.
Like, at this point we’ve moved into “list and explain all of your opinions about morality and also about everything else”. And, man, that is definitely a “we’re gonna be here all day or possibly all year or maybe twelve years” sort of conversation.
Well, yes, those are indeed two different things. But also, neither of them are things that I’ve said, so neither of them seems relevant…?
I think that you’re reading things into my comments that are not the things that I wrote in those comments. I’m not sure what the source of the confusion is.
Well, things don’t just “matter” in the abstract, they only matter to specific people. I’m sure that the effects on Bob of Bob reading my comments matter to Bob. This is fine! Indeed, it’s perfect: the effects matter to Bob, and Bob is the one who knows best what the effects are, and Bob is the one best capable of controlling the effects, so a policy of “the effects on Bob of Bob reading my comments are Bob’s to take care of” is absolutely ideal in every way.
And, yes indeed, it would be very bad for society as a whole (and relevant subsets thereof, such as “the participants in this discussion forum”) if we were to adopt the opposite policy. (Indeed, we can see that it is very bad for society, almost every time we do adopt the opposite policy.)
Like, very straightforwardly, a society that takes the position that I have described is just better than a society that takes the opposite position. That’s the rule consequentialist reasoning here.
This is starting to feel satisfying, like I understand where you are coming from. I have a relatively strong curiosity here; I want to understand where you’re coming from.
It sounds like there are rules such as “saying things that are true, relevant and at least somewhat important” that you strongly believe will lead to the best outcomes for society. These rules apply to the decision to post the tenth comment, and so you follow the rule and post the comment.
So to be clear would it be accurate to say that you would choose (a) rather than (b) in my previous question? Perhaps with some amendments or caveats?
I’m trying to ask what you value.
And as for listing out your entire moral philosophy, I am certainly not asking for that. I was thinking that there might be 3-5 rules that are most relevant and that would be easy to rattle off. Is that not the case?
Right.
I guess I’d have to think about it. The “rules” that are relevant to this sort of situation have always seemed to me to be both very obvious and also continuous with general principles of how to live and act, so separating them out is not easy.
I think your comment here epitomizes what I value about your posting. I’m not here to feel good about myself, I want to learn stuff correctly the first time. If I want to be coddled I can go to my therapist.
I also think that there’s a belief in personal agency that we share. No one is required to read or comment, and I view even negative comments as a valuable gift of the writer’s time and energy.
I wish I could write as sharply and intelligently as you do. Most people waste too many words not saying anything with any redeeming factor except social signaling. (At least when I waste words I try to make it funny and interesting, which is not much better, but intended as sort of an unspoken apology.)
Yep, makes sense.
I hope that, at least, you now have some idea of why I view such suggestions as “why can’t you just write more nicely” as something less than an obviously winning play.
EDIT: The parent comment was heavily edited after I posted this reply; originally it contained only the first paragraph. The text above is a reply to that. I will reply to the edited-in parts in a sibling comment.
(Sorry about the edit Said, and thank you for calling it out and stating your intent. I was going to DM you but figured you might not receive it due to some sort of moderation action, which is unfortunate. I figured there’d be a good chance that you’d see the edit and so I’d wait a few hours before replying to let you know I had edited the comment.)
In another comment thread, you write:
This is quite deceptive, as in this very post, you cite something I wrote on my own Shortform as contributing to your decision to ban me.
I was talking about making top-level posts or shortforms with object-level objections. Calling an author a “coward” for banning you from their post is of course not what I was talking about as something that is fine to do on your shortform.
Your conduct on the site matters, and the conduct you displayed in that thread seemed bad, independently of where it occurred. I didn’t mean to imply that just because something is a top-level post or shortform, it’s OK to write whatever you want there; we still have standards here. But it’s the site moderators’ job to enforce those standards, not the original post author’s, which is what this whole post is about.
Or to state it a different way: No, there is nothing deceptive here. The fact that you can make a shortform or top-level post does indeed help lower the cost of authors deleting comments from their posts. It doesn’t change what we do in terms of site-wide moderation.
FYI, that link goes to a very weird URL, which I doubt is what you intended.
The link you had in mind, I am sure, is to this thread. And your description of that thread, in this comment and in the OP, is quite dishonest. You wrote:
In ordinary conversation between normal people, I wouldn’t hesitate to call this a lie. Here on LessWrong, of course, we like to have long, nuanced discussions about how something can be not technically a lie, what even is “lying”, etc., so—maybe this is a “lie” and maybe not. But here’s the truth: the first use of the word “coward” in that thread was on Gordon’s part. He wrote:[1]
And I replied:
To describe this in the way that you did, has the obvious connotation to any reasonable reader that I, unprompted, went and wrote something like “Gordon is a coward for banning me from his posts!”. That’s the picture that someone would come away with, after reading your characterization. And, of course, it would be completely inaccurate.
You have, again and again in this post and the comments here, relied on this sort of tendentious description and mischaracterization. This is yet another example.
I think that you know perfectly well how dishonest this sort of thing is. The fact that you have to rely on such underhanded tactics to make your case should give you pause.
P.S.: I must note that I am currently rate-limited in my ability to comment (a result of the LW mods strong-downvoting my comments in this thread, e.g. this one). How does this square with “Said, feel free to ask questions of commenters or of me here”?
FWIW, I do not think that Gordon’s part in that particular exchange was problematic or blameworthy in any way.
I’m not sure the more accurate picture is flawless behavior or anything, but I do think I definitely had an inaccurate picture in the way Said describes.
That… is an unfortunate application of the auto rate-limiting system. I’ll see whether I can disable it easily for you. I’ll figure out something in the next few hours, but it might require shipping some new code and disentangling the surrounding systems a bit. Sorry about that. Definitely not intended.
I just enabled “ignore rate limits” for this post (which I assumed we’d want for this post to avoid this issue but I think I’m the only one that remembered that feature existed)
Yep, I did indeed not remember that feature. Thank you!
Ooops, sorry, fixed.
I agree that the context is helpful and importantly makes the “coward” aspect more understandable. I also omitted other context that I think makes the thing I intended to communicate with “you called him a coward” a more reasonable summary[1]. I think I am sold that it would have been better for me to give a bit more context and to summarize things a bit differently. I don’t overall agree that it was substantially misleading, but I agree I could have done better.
For example this paragraph:
Which I also consider to follow the same unhelpful patterns described in the OP.
In response to a comment, moderator @Ben Pace describes me as:
I consider this to be a libelous characterization. To say that it is false is an understatement.
Ben Pace should either support this accusation with cited quotes from me (which he will be unable to do, of course), or else retract it and apologize.
Here’s the comment where you say it’s normatively correct to “stick one’s heels in and be unwilling to budge on a position regardless of reason or argument”.
My best guess is that you just wrote that in order to write something that reads as a definitive slap-down, but regardless it is a pretty silly thing to write (and made me respect you less!).
It would seem that you didn’t follow the link in that text. My best guess is that you just wanted to score a point against me, and didn’t bother to check or figure out what it was that I was actually saying.
If you had, you would have read the comment that I linked to, the key section of which I will now quote:
As you can see, this is very much not “epistemically committed to not changing his mind in the face of evidence and argument”.
I’ll take that retraction and apology now, please.
Once again I think you kind of don’t understand good communication and are being silly. That comment is recommending people not change their minds in comment threads. Like, you go on in that comment to say:
And yet here you demand I immediately change my mind in response to reason and evidence.
You go on to praise Schopenhauer when he writes about how to have discourse, including (for example) this line:
That comment of mine you’re responding to is one where I describe talking to you often as similar to “an LLM in whose system prompt it was written that it should not be able to either agree with or understand your point”. Zack Davis describes that position as “laughable, obviously wrong, and deeply corrosive” but then you go on to link to yourself repeatedly endorsing not changing your mind in comment sections and say that such behavior is “normatively correct”. You guys have got to decide whether the position is laughable or obviously correct! These are consequentially different! There may be some light between your position and my description but they’re quite close.
I’m not invested in more litigation of your behavior and so on. We’ve made our call.
I think this is an improperly narrow interpretation of the word “now” in the grandparent’s “I’ll take that retraction and apology now.” A retraction and apology in a few days, after you’ve taken some time to cool down and reflect, would be entirely in line with Schopenhauer’s advice. I await the possibility with cautious optimism.
I mean, I do think that (recall that I actually did the experiment with an LLM to demonstrate), but do you understand the rhetorical device I was invoking by using those exact words in the comment in question?
You had just disparagingly characterized Achmiz as “describing [interlocutors’] positions as laughable, obviously wrong, deeply corrosive, etc”. I was deliberately “biting the bullet” by choosing to express my literal disagreement with your hyperbolic insult using those same words verbatim, in order to stick up for the right to express disagreement using strong language when appropriate.
Just checking that you “got the joke.”
Please note that I had put a Disagree react on the phrase “normatively correct” on the comment in question. (The react was subsequently upvoted by Drake Morrison and Habryka.)
My actual position is subtler: I think Schopenhauer is correct to point out that it’s possible to concede an argument too early and that good outcomes often result from being obstinate in the heat of an argument and then reflecting at leisure later, but I think describing the obstinacy behavior as “normatively correct” is taking it way too far; that’s not what the word normative means.
I looked over the comment; while I think it was a reasonable stab at what I was trying to say, it didn’t quite meet my standards for expressing my opinion versus stating a verifiable fact, so I’ve edited it.
I’m happy that my comment is more accurate, and I’m grateful for Said’s comment having that effect; I do think his comment about not-changing-your-mind-in-response-to-reason-or-argument being ‘normatively correct’ was misleading about his epistemic state (e.g. this also communicated the wrong thing to Zack).
I endorse this interpretation.
Who could possibly be disagree-voting with this comment? What does it even mean to disagree with me saying that I endorse someone’s interpretation of my own words?
I think that this position is reasonable, but wrong. On the other hand, perhaps we do not actually disagree on this point, as such, because of the next point:
I disagree. Elaborating:
Suppose that we are considering some class of situations, and two possible behaviors, A and B, in such a situation; and we are discussing which is the correct behavior in a situation of the given class. It may be the case (and we may claim) that any of the following hold:
1. Behavior A is always correct; behavior B is never correct.
2. Behavior B is always correct; behavior A is never correct.
3. In all cases, either A or B is fine; both are acceptable, neither is wrong.
4. In certain situations of the given class, A is correct and B is wrong; in other situations of the given class, B is correct and A is wrong.
5. In certain situations of the given class, A is correct and B is wrong; in other situations of the given class, B is correct and A is wrong; in yet other situations of the given class, either A or B is fine.
6. In certain situations of the given class, A is correct and B is wrong; in other situations of the given class, either A or B is fine.
7. In certain situations of the given class, B is correct and A is wrong; in other situations of the given class, either A or B is fine.
In which of these scenarios would you assent to the claim that “A is normatively correct”?
My own position is that the answer is “all of the above except #2 and possibly #7”. (I can see a definitional argument based on #7, but I am not strongly committed to including it in the definition of “normative”.)
When discussing rationality, I typically use the word normative to refer to what idealized Bayesian reasoners would do, often in contrast to what humans do.
(Example usage, bolding added: “Normatively, theories are preferred to the quantitative extent that they are simple and predict the observed data [...] For contingent evolutionary-psychological reasons, humans are innately biased to prefer ‘their own’ ideas, and in that context, a ‘principle of charity’ can be useful as a corrective heuristic—but the corrective heuristic only works by colliding the non-normative bias with a fairness instinct [...]”)
As Schopenhauer observes, the entire concept of adversarial debate is non-normative!
“[N]ot demand[ing] [...] that a compelling argument be immediately accepted” is normatively correct insofar as even pretty idealized Bayesian reasoners would face computational constraints, but a “stubborn defense of one’s starting position—combined with a willingness [...] to change one’s mind later” isn’t normatively correct, because the stubbornness part comes from humans’ innate vanity rather than serving any functional purpose. You could just say, “Let me think about that and get back to you later.”
Understood. However, I am not sure that I approve of this usage; and it is certainly not how I use the word (or, to a first approximation, any words) myself. My comments are, unless specified otherwise, generally intended to refer to actually-existing humans.[1]
Indeed, so either we take this to mean that any normative claims about how to conduct such debates are necessarily meaningless, or else we allow for a concept of normativity that is not restricted to idealized Bayesian reasoners (which, I must remind you, are not actually real things that exist). Now, I am not saying that we should not identify an ideal and try to approach it asymptotically, but surely it makes no sense to behave as if we have already reached that ideal. And until we have (which seems unlikely to happen anytime soon or possibly ever), adversarial debate is a form of epistemic inquiry we will always have with us. So there must be right and wrong ways to go about doing it.
“Stubbornness” is just the refusal to immediately update. Whether it makes sense to continue defending a point, or whether it makes more sense to say “let me think about it and get back to you”, is contingent on various circumstantial aspects of the situation, the course of the discussion, etc. It does not seem to me like this point can make any substantive difference.
Perhaps not necessarily endorsing the actually existing distributions of certain traits in humans, perhaps generalizing slightly to “actually-existing humans but also very similar entities, humans under small plausible modifications, etc.”, but essentially still “actual humans”, and definitely not “hypothetical idealized Bayesian reasoners, which don’t exist and who maybe (probably?) can’t exist at all”.
We are not talking, here, about some subtle point of philosophy, or some complicated position on the facts of some difficult and specialized subject. You made a claim about my views. I disclaimed it. Either you have some support for your claim, or it is unsubstantiated. It would seem that you have no support for your claim.
When one makes objectionable factual claims about another person, and is unable to substantiate those claims, the correct thing to do is to retract them and apologize. (This does not preclude making the claim again in the future, should it so happen that you acquire previously unavailable support for the claim! But currently, you have nothing—and indeed, less than nothing—namely, a statement from me disclaiming your characterization, and nothing from you to support it.)
If you refuse to do so, the only appropriate conclusion is that you are someone who knowingly lies about other people’s views.
Schopenhauer was here describing human behavior, having just two sentences prior (in a section which I bolded for emphasis) characterized said behavior as “the weakness of our intellect and the perversity of our will”. To say of this merely that it is “Schopenhauer when he writes about how to have discourse” is disingenuous.
I am not a “you guys” and I reject the notion that I have to decide anything for anyone else. Zack is perfectly capable of speaking for himself, as I am capable of speaking for myself. If I endorse someone’s point, I’ll say so.
What is “normatively correct” is what I described in the section I quoted in the grandparent. I have been completely clear about this view, never wavering from it in the slightest. The idea that there is some sort of ambiguity or vacillation here is entirely of your own false invention.
Your characterization of me as “an LLM in whose system prompt it was written that it should not be able to either agree with or understand your point” is obviously insulting and, more importantly, unambiguously and verifiably false,[1] insofar as I have agreed with people often.
This again is an erroneous and deceptive characterization.
The bottom line is that, once again, your claim about my views is demonstrably false, and you have no support for it whatsoever. You should retract it and apologize to me.
And not just in the trivial “actually I am a biological human and not a large language model” sense.
I mean, I disagree, but doesn’t seem like further conversation will be productive.
Commenter @Lukas_Gloor writes:
This impression is mistaken. I have no such “distaste”.
On the contrary, my comments are often aimed at helping to “pin down” those bits. Asking probing questions, asking for examples, asking authors to explain how they are using certain words, etc., is precisely the correct way to do such “pinning down”.
@Lukas_Gloor continues by saying:
The unacknowledged possibility here, for any given post in this category, is that the post had no coherent point, and was in fact confused, nonsensical, simply wrong, or some combination thereof. In such a case, it is entirely correct that I should not “get closer to seeing the point”, and anyone who did “get closer to seeing the point” of such a post would be making a mistake—becoming more wrong instead of less wrong. In other words: if “there is no there there”, then “getting there” is wrong, and “not getting there” is correct.
The way that we can distinguish between this possibility, and the possibility that there is something there but it’s difficult to verbalize or to characterize coherently, is precisely via discussion, conceptual analysis, examination of intent behind word choices, examination of examples (or trying to think of examples), etc. And if we find “something there”, the same methods are the means by which we can develop and refine it.
Most posts whose implied central points are not coherently understood by the author are not worth making. But some things that look similar are gesturing at fruitful puzzles, which are too difficult for the author to solve by the time they’ve written the post, or possibly ever. This shouldn’t of course involve the author claiming to have a coherent picture already.
The incentives should carve out a niche for this kind of communication, acknowledging the practical impossibility of distinguishing the two. The difficulty of distinguishing it from worthless nonsense is already too much of a punishment, so any incentives should actually point the other way, possibly on orthogonal or correlated considerations that can actually be resolved in practice.
Of course. I wholly agree with this.
Empirically, this is clearly false. The track record of LW in the past ~8 years makes this very clear.
That seems hard to judge from anything empirical, you’d need to compare with the counterfactual where there is little difficulty in distinguishing and so good tentative takes don’t need to live in squalor among piles of worthless nonsense (especially well-presented “high effort” worthless nonsense). So I don’t see how it can possibly be clearly false, and similarly I don’t see how it can possibly be clearly true, since it has to rely on low-legibility intuitive takes about unobservable counterfactuals.
Also, the problems arising from the difficulty of distinguishing fall both on the side of the authors (in the form of incentives) and on the side of the readers (in the form of low availability of good content of this type, and of having to endure the worthless nonsense without even being able to know whether it actually is worthless nonsense).
Commenter @Alexander Gietelink Oldenziel writes:
@habryka replies:
This seems to me to be spectacularly disingenuous, given the discussion in this subthread, where @habryka writes:
(See the linked subthread for more details.)
My guess is this is clear to most readers, but to clarify, I said “but there is basically no engagement [of Said] with Duncan that played any kind of substantial role in any of this”. I.e. I don’t think your comments in any threads with Duncan played much into this decision.
Duncan’s complaints about you also preceded his direct conflict with you, as far as I can remember. The quote I dug up for you just happened to have been made in that context (which shouldn’t be very surprising, as people rarely publicly complain about other users on LessWrong in the precise way you were asking about).
This just doesn’t make any sense whatsoever. I don’t understand how you can say this and expect to be believed, when you cite Duncan as one of your examples of “many top authors citing [Said] as a top reason for why they do not want to post on the site” (and indeed as the only example for which you’ve been able to provide any kind of unambiguous proof)—and that, in turn, is your explanation for what “the stakes” of this decision are!
I only asked about it because you made the claim in the first place! These are your words! You wrote: “many top authors citing him as a top reason for why they do not want to post on the site, or comment here”. For you to now pretend that I am making some weird demand for some weird form of evidence for some weird reason, is yet another example of disingenuousness.
Come on, please, you can figure out what I mean with those sentences.
Yes, many top authors cite you as a reason for why they do not want to post on the site. This does not mean that your specific interactions with the one specific author we are talking about are the reason why many top authors are doing so. These are two at most weakly correlated points. It’s really not hard to imagine how they could come apart. Those interactions are not even the reason why Duncan said the same thing, as his complaints started substantially before the relevant thread where he said it more explicitly.
I literally clarified this two comments ago, and in like 4 other comment threads you are involved in. Most of these authors cite you, in private communication. This is not a particularly complicated thing to understand. Separately, at least some top authors have publicly complained about you, but that isn’t the load-bearing part of why I wrote the above, or why I believe what I believe. We’ve already discussed this a bunch. I don’t know what you are trying to say here.
… nor did I claim this, nor do I think that you have claimed this…? What in the world would make you think that this is what we’re talking about? This just seems like a non sequitur.
I am not sure how this confusion came about, but let me try to clarify. When you say—in a public statement, the purpose of which is to stake out a publicly known, “official” position, supported by publicly made arguments—that “many top authors” cite me as “a top reason for why they do not want to post on the site”, you have two (non-mutually-exclusive) options:
1. Point to public statements by relevant authors
2. Allude to private communications from relevant authors
If you just do #2, this is basically worthless. “People have said these-and-such things to me in private”, offered with no corroboration of any kind, may hold some small weight in explaining and justifying your actions, if you’ve built up a large amount of trust and good will. But no more.
And so, understandably, you have not relied only on #2, but have also attempted to do a lot of #1. You have attempted to point to several examples of authors who’ve supposedly made such statements, either in public, or in a way that’s verifiable. This, again, would be an entirely understandable thing to do, given your aforementioned purpose.
Naturally, when you cite such evidence, you should expect that you will be expected to actually provide it, and that it will be examined, to verify that it is what you say it is!
And in fact, much of this purported publicly available or verifiable evidence, which you have attempted to provide, has, upon examination, turned out to be flimsy at best.
Importantly, this also means that your word about the existence of the private evidence (which cannot be publicly verified) is cast into doubt.
Commenter @alkjash writes:
This would seem to be a highly dubious claim, at best.
I have looked through the entirety of @alkjash’s posting/commenting history and used the site search feature, and have found only the following interactions involving the two of us:
Discussion of a “multiple agent” model, in comments on post “Internal Double Crux”
Obviously nothing day-ruining or even unpleasant here.
Brief discussion of input device effectiveness, in comments on post “Design 2”
Ditto.
A couple of interactions in comments on post “New moderation tools and moderation guidelines” (concerning UX of records/traces of moderation actions)
Not really “criticism of writing”, but rather a discussion of costs/benefits of certain aspects of site design.
Comment (just one; no discussion) on dialogue “Originality vs. Correctness”
Technically “criticism”, I guess, but more like “agreeing with part of what was said, disagreeing with another part”. Nothing unpleasant here either.
These four cases seem to be the totality of all my interactions with @alkjash, throughout the entirety of my tenure here on Less Wrong.
So where are these “unpleasant interactions” that “ruined [your] day”…?
I am surprised there are so few—perhaps in that calculation I was mistakenly tracking some comments you made on other posts that I didn’t directly participate in.
Nevertheless, every single example you bring up above was in fact unpleasant for me, some substantially so—while reasonable conclusions were reached (and in many cases I found the discussion fruitful in the end), the tone of your comments put me on edge and sucked up a lot of my mental energy. I had the feeling that to interact with you at all was an invitation to be drawn into a vortex of fact-checking and quibbling (as this current conversation is a small example of).
It is not surprising to me that you find all of these conversations unobjectionable. To me, your entrance to my comment threads was a minor emergency. To you, it was Tuesday.
I stand by the claim that a plurality of my unpleasant interactions on this site involved you—this is not a high bar. I do not recall another user with whom I had more than one.
I remain confused as to whether banning you is the correct move for the health of the site in general. The point I was trying to make was along the lines of [for a class of writers like alkjash, removing Said Achmiz from LessWrong makes us feel more relaxed about posting].
Some of them look positively cooperative to me, and do not look like Said thought ill of you in any way, nor that it would look bad if you replied or didn’t reply to those messages.
Am I correct in stating that the main reason it is unpleasant and scary is that you felt socially threatened in those moments? As in, your standing in the social group you considered LessWrong to be, and that you considered that you were a part of? And a part of the obligation to reply involved a feeling of wanting to defend yourself and your standing in the group, especially since a gigantic part of what gives someone status in a sphere like LW is your intellectual ability, or your ability to be right, or to not look dumb at the very least?
That is at least how I feel when I try to simulate why I’d feel the way you claim to have felt. And I empathize with that feeling.
This may be a relevant factor, and I can be rightfully accused of being too status-conscious and neurotic about such things, but I don’t think it’s really the issue. For one, I honestly expect to come out of most interactions with Said having won status points, not lost them.
One of the main reasons is his general snideness. Let me try to spell out a couple things.
1. I unfortunately inhabit and am socially adjusted to a huge swath of the world where the discourse norms require that [nothing that could be perceived as negative/directly contradictory is ever said publicly of anyone]. I come to LW to take a cold shower once in a while, to be woken up from the hostile epistemic jungle I live in. Within this analogy, afaict Said operates under the norm that absolute zero is the perfect temperature, and that’s a little too cold for me.
In any other culture/relationship I participate in, if someone communicated to me in the style that Said takes, for example making a literature search through my published work and making point-by-point rebuttals of claims therein, it would be an extreme shock (now I recognize that this exact example is extremely unfair as he is responding to my direct negative characterization of his behavior, but imo the top-level post contains enough better examples). My mind would immediately jump to [this person is out to get me e.g. fired] or [I have really committed a catastrophic and irreversible error]. Over the years here, perhaps three quarters of my brain have acclimated to the idea that the discourse norms that LWers follow, and Said follows extremely, are a reasonable way to have a conversation, and the other quarter is still screaming in terror.
2. On another level, I personally relate to LW as a casual forum for truth-seeking-related banter, emphasis on the word casual. Especially as someone who emphasizes [originality] and [directional correctness] over [correctness per se], I find the conversations that Said leads me into to be hostile to the way I think out loud. I like to have conversations where we both toss back and forth 99 vaguely truthy-sounding ideas and one of them happens to be a deep insight, and the other 98 are irrelevant or verifiably false and immediately brushed under the rug. However, if I try to converse with Said like this, every comment I make is directed into a scrutinization of the 98 irrelevant/false things. In my world, if I have produced one true, interesting insight in all of this, I’ve made progress. In my model of Said’s, I have sinned 98 times.
I do realize point 2 is not the way LW is intended to operate, and this mode of banter is absolutely not compatible with serious discussions of people’s long-term reputations with consequences on the level of multi-year banning. Let nobody ever give me moderator privileges beyond my personal blog. I am not using this frame at all to justify said banning. I am only using it to explain why I personally prefer it.
Well, I would say the whole reason LW mods are banning Said is that we do, in fact, want LW to operate this way (or at least directionally similar to this). I do also want wrong ideas to get noticed and discarded, and I do want “good taste in generating ideas” (there are people who aren’t skilled enough at casual idea generation for me to feel excited about them generating such conversation on LW). But I think this kind of banter is an essential part of any real generative intellectual tradition.
I really appreciate your introspection on this, but suggest that status consciousness is probably still a large part of what’s going on, because if you weren’t worried about looking bad in front of an audience (i.e., looking like you didn’t have an answer to one of Said’s questions/objections), you could simply ignore or stop replying to him if you thought his style of conversation was too extreme for your tastes, instead of feeling like his “entrance to my comment threads was a minor emergency”.
I wanna flag, your use of the word “simply” here is… like, idk, false.
I do think it’s good for people to learn the skill of not caring what other people think and being able to think out loud even when someone is being annoying. But, this is a pretty difficult skill for lots of people. I think it’s pretty common for people who are attempting to learn it to instead end up contorting their original thought process around the anticipated social punishment.
I think it’s a coherent position to want LessWrong’s “price of entry” to be gaining that skill. I don’t think it’s a reasonable position to call it “simply...”. It’s asking for like 10-200 hours of pretty scary, painful work.
The way I feel about this reply is “I am an adaptation-executor, not a fitness optimizer”? Your reading is a perfectly valid psychoanalysis of my perfectionism around comments sections and compulsions to reply, but as far as I recall my internal dialogue stopped at “this is quite a tiresome minor emergency, I will have to tread several steps more carefully than usual in replying.”
Let me reiterate that my previous reply is expanding on the reasons I personally found interacting with Said difficult. None of our conversations were remotely ban-worthy behavior.
sure, the prestige challenge seems to be relevant, but I feel like the problem is that said also makes dominance threats and those suck. (I feel like there’s something going on where a big enough prestige challenge spills into dominance, or something? stated in the spirit of exploratory ramblings that may or may not have an insight somewhere downstream of them)
edit: actually I don’t want to deal with this right now, bye. I resisted my urge to delete this comment’s contents
What in the world is this about…?
Your model of my view bears very little resemblance to my actual view.
I have two questions:
If you found the discussion fruitful in the end, why is that not the bottom line? (Especially if this fruitfulness involved “reasonable conclusions” being reached?)
(Here I am talking about “the bottom line” only with respect to your interaction with me directly, ignoring any effects like the benefit of a comment exchange to other commenters or to readers, etc.)
You say that you “had the feeling that to interact with [me] at all was an invitation to be drawn into a vortex of fact-checking and quibbling”. But as we can see from the linked examples, there generally was not, in fact, any “vortex of fact-checking and quibbling”.[1] So it would seem that the “feeling” you had was false-to-fact. Do you agree with this evaluation?
Indeed, in the exchange at the first link, the putative roles were reversed—you were questioning me about what I believe, etc. Of course, I have no objection to this! But it hardly serves as an example of me drawing anyone into any vortices of quibbling…
This is another comment where I do not understand the downvoting.
I spent ~2 hours reading the comments, and I just want to say I regret it. The comments are painful to evaluate in an unbiased way (very combative) and overall don’t really matter.
What you’re doing here is conflating contempt based on group membership with contempt based on specific behaviors. Sneer-clubbers will sneer at anyone they identify as a Rationalist simply for being a Rationalist. Said Achmiz, in contrast, expresses some amount of contempt for people who do fairly specific and circumscribed things like write posts that are vague or self-contradictory or that promote religion or woo. Furthermore, if authors had been willing to put a disclaimer at the top of their posts along the lines of “This is just a hypothesis I’m considering. Please help me develop it further rather than criticizing it, because it’s not ready for serious scrutiny yet.” my impression is that Said would have been completely willing to cooperate. But possible norms like that were never seriously considered because, in my opinion, LW’s issue is not the “LinkedIn attractor” but the “luminary attractor”. I think certain authors here see how Eliezer Yudkowsky is treated by his fans and want some of that sweet acclamation for themselves, but without legitimately earning it. They want to make a show of encouraging criticism, but only in a kayfabe, neutered form that allows them to smoothly answer in a way that only reinforces their status. And Oliver Habryka and the other mods apparently approve of this behavior, or at least are unwilling to take any effective steps to curb it, which I find very disappointing.
You say:
Out of curiosity, I clicked on the first post that Said received a moderation warning for, which is Ray’s post on ‘Musings on Double Crux (and “Productive Disagreement”)’. You might notice the very first line of that post:
It’s not the exact kind of disclaimer you proposed here (it importantly doesn’t say that readers shouldn’t criticize it) but it also clearly isn’t claiming some kind of authority or fully worked-out theory, and is very explicit about the draft status of it. This didn’t change anything about Said’s behavior as far as I can tell, resulting in a heavily downvoted comment and a moderator warning.
There are also multiple other threads (which I don’t have the time to dig up) in which Said made his position clear that indeed it is most important for him to provide feedback at the formative stages of an idea, for if he does not criticize it at close to the earliest possible time, the idea will have found enough social momentum to be incorrigible. This makes me in-general very skeptical that any such disclaimers would be successful. My best guess is that someone trawling through the archives would find someone who attempted this technique, and that this did little to ward off the usual results.
(I don’t super want to litigate this in much more detail, but I figured I would share these two datapoints that I could easily share that make me think your model of the dynamics here is off. I am not here claiming for either of these two that there is some kind of open-and-shut case)
That’s fair enough, but it only demonstrates that he wasn’t willing to unilaterally and proactively do this, not that he wouldn’t have cooperated if you had imposed it on him. It’s baffling to me that you spent hundreds of hours on this issue without (apparently) even attempting to impose a compromise that would have brought out the best in both Said and his detractors.
That’s not true! Did you read the very first moderation conversation that we had with Said that is quoted in the OP?
After the comment above, we reached out to Said privately and Elizabeth had something like an hour long chat conversation with him asking him what we need to do to get him to change his behavior, to which his response was:
What do you suggest we do after such a response? The response seems to me to pretty clearly show that he wasn’t/isn’t interested in compromising on these dimensions.
This is a response to asking him to be, in full generality, more tactful or “prosocial,” not to asking him to follow a clear bright-line rule. I’ll grant that Said may not be willing or able to be tactful enough in all situations, yet there seems to be rough consensus that his comments have a lot of value in other situations, so my suggestion would be to try to delineate those situations.
Random thought: maybe there could have been disproportionate gains to be had by getting Said to involve more humor in his messaging and branding him the official Fool of Lesswrong.com?
It seems the community indeed gets service out of Said shooting down low-quality communication, and socially limiting that form of communication to his specific role might have insulated the wider social implications, so that most of the value would have been preserved either way?
My model of Said would have been offended by being asked to take on a Jester role as a condition of staying on LessWrong, but perhaps he would have been interested?
I do think the background culture that we’re in is one that doesn’t really have this role anymore, because comedians are sometimes major public figures whose opinions are treated with respect; people don’t dismiss what they say just because they’re frivolous.
I think it still sounds intriguing to try it out sometime. (With a different user who is funnier than Said.)
I volunteer as tribute

Unfortunately I do not think there are clear bright-line rules that would fix these problems, as clear bright-line rules are close to non-existent in social situations like this (what is the clear and unambiguous bright-line rule that would delineate a sufficient note at the top?). The closest that I found was to allow authors to moderate their own posts, which we did implement and which Said has been vehemently opposed to in ways I talk about in the OP.
Beyond that, I also don’t think your characterization that this was in response to a fully-general request to be more tactful or “prosocial” is accurate. The question Elizabeth asked just before most of the quote above is:[1]
To which Said responded with most of the quoted section above. Like, yeah, Elizabeth didn’t propose a specific bright-line rule, but this exchange to me does not leave the door open for suggesting such clear rules, or suggesting further compromises.
If Said and Elizabeth both agree I could share the full transcript, but don’t want to do so unilaterally.
Of course you’re right that there are no perfectly clear bright-line rules that would completely fix these problems; the question is whether there is a clear enough rule that would ameliorate the problems. You would have replaced a judgment call on whether all of Said’s comments across the whole site were on net beneficial with a much easier judgment call on whether a given note is sufficient or not. And whether Said’s comments were net beneficial was evidently such a close call that you dithered about this decision for literal years, which would seem to indicate that a relatively small nudge would have tipped his contributions to the positive side.
Also, if the door to Said changing his behavior was so completely closed, I’m really confused about what all those hundreds of hours were spent on.
Just to be clear, this overall does not strike me as a close call. The situation seems to me more related to the section on “Crimes that are harder to catch should be more socially punished” plus some other dynamics. My epistemic state changed a lot over the years, not in a way that would result in thin margins, but in a way where some important consideration, or some part of my model, would shift, and this would switch things from “in expectation this is extremely costly” to “in expectation what Said is doing is quite important”.
Something being a difficult call to make does not generally mean that it also needed to be a close call.
I mean, we tried anyways, but I do think it was overall a mistake and a reasonable thing to do at the time would have been to respond with “well, sorry, if you as a commenter are already pre-empting that you are not willing to change basically at all based on moderator feedback, then yeah, goodbye, farewell, good luck, we really need more cooperation than that”. Elizabeth advocated for this IIRC, and I instead tried to make things work out. I think Elizabeth was ultimately right here.
I think the people who talk as though the contested issue here is Said’s disagreeableness combined with him having high standards are missing the point.
If it was just that (and if by “posts that are vague” you mean “posts that are so vague that they are bad, or posts that are vague in ways that defeat the point of the post”), I’d be sympathetic to your take. However, my impression is that a lot more posts would trigger Said’s “questioning mode.” (Personally I’m hesitant to use the word “contempt,” but it’s fair to say his comments made engaging more difficult for authors, and they did sometimes involve what I think of as “sneer tone.”)
The way I see it, there are posts that might be a bit vague in some ways but they’re still good and valuable. This could even be because the post was gesturing at a phenomenon with nuances where it would require a lot of writing (and disentanglement work) to make it completely concise and comprehensive, or it could be because an author wanted to share an idea that wasn’t 100% fleshed out but might have already been pointing at something valuable. I feel like Said not only has a personal distaste for that sort of “post that contains bits that aren’t pinned down,” but it also seemed like he wouldn’t get any closer to seeing the point of those posts or comments when it was explained in additional detail. (Or, in case he did eventually see the points, he’d rarely say thanks or acknowledge that he got it now.) That’s pretty frustrating to deal with for authors and other commenters.
(Having said all that, I have not had any problems with Said’s commenting in the last two years—though I did find it strongly negative and off-putting before that point. And to end with something positive, I liked that Said was one of the few LessWrongers who steered back a bit against Zvi’s very one-sided takes on homeschooling—context here.)
If a post starts off vague and exploratory, on a topic that isn’t very easy to think/write about, it would make sense that it usually couldn’t be clarified enough to meet Said’s standards within a few back-and-forth comments.
Yes, but I think that’s in part because of the nature of intellectual progress, and in part because there are so few people like Said who are incentivized (by their own personality) to push back hard and persistently on this kind of post (so people are not used to it). I think it’s also in part due to the tone that he typically employs, which he theoretically could change, but that seems connected with his personality in a way that we seemingly couldn’t get one without the other.
Sure, I don’t mean to imply that Said is beyond reproach, or that all his comments were necessarily good. Just that I think insofar as this post was an attempt to address the reasons Said-defenders felt he needed so much defending, it has failed.
As a tenured (albeit perhaps now ‘emeritus’) member of the “generally critical commentator crew”, I think this is the wrong decision (cf.). As the OP largely anticipates the reasons I would offer against it, I think the disagreement is a matter of degrees among the various reasons pro and con. For a low resolution sketch of why I price the various ‘pro tanto’ reasons differently from the moderators:
I don’t think Said’s commenting, in aggregate, strays that close to the sneer attractor. “Pointed questions with an undercurrent of disdain” may not be ideal, but I have seen similar-to-worse antics[1] (e.g. writing posts which are thinly veiled/naked attacks on other users, routine abuse of subtext then ‘going meta’ to mire any objection to this game with interminable rules-lawyering) from others on this site who have prosecuted ‘campaigns’ against ideologies/people they dislike.[2]
The principal virtue of Said doing this for LW is calling bullshit on things which are, in fact, bullshit. I think there remains too much (e.g.) ‘post’-‘rationalist’ woo on LW, and it warrants robustly challenging/treating with the disdain it deserves. I don’t see many others volunteering for duty.
The principal cost is when this misfires, so the author ends up led into a subthread wasteland by Said thanks to him taking an odd (unintentionally tendentious?) line of questioning. In principle, this should not be that costly: if a comment asks for a clarification where I am confident other readers would agree with me the questioner is being very dumb, willfully obtuse, or making a ‘cheap shot’, I can ignore them without fear of third parties making an adverse inference. This applies whether this is the initial exchange or 1+ plys deep.[3]
Even if ‘in principle’ this is fine, maybe (per OP) the scales tilt the other way in practice. But I don’t think doing more to be ‘writer friendly’ by squashing putative gadflies like Said gets you enough marginal high-quality community content to be worth it across the scales of the (admittedly-nebulous) ‘taxing criticism’.
The track record of hewing moderation to cater for authors has not borne much fruit so far: the mod tools were introduced in large part to entice Eliezer back. He’s not. I think I recall a lot of mod effort has been spent on mediating spats between high profile users/contributors, but I think the usual end result is these dramatis personae have faded away.
#
Regardless of all that, it’s your website, and I’m barely a stakeholder worth considering (my last substantial contribution was over a decade ago). I wouldn’t hold it against Pace or Habryka if, from arguments we had on the EA forum[4], they thought my judgements best inverted, and my absence satisfying.
I expect I will continue participating very little in LW, although Said getting banned has little to do with it. Basically I don’t find enough yield of ‘good generalist (i.e. not principally focused on AI) content’ here anymore. I think Said incrementally helped by reducing the volume/prominence of not-so-good generalist content, so this seems a step in the wrong direction.[5] Happy days, and more fool me, if the future proves me wrong.
Although I appreciate mitigating circumstances (and an isolated case), moderator behaviour on this post has been ‘similar-to-worse antics’ too. It seems bad form to (as it appears Habryka has done) strong-downvote a large number of (most?) comments by Said in the threads in which he is arguing with him (can I do this if I get into a fight with someone with much lower vote power than me?). Ditto (as Pace did) use site-admin info to score points against a dissenting user he wanted to be snide to, especially when that user seems to be dissenting in the manner OP requested they do.
I’m not giving examples to avoid prompting a subthread wasteland on whatever I bring up. If widely disbelieved and crucial to the discussion, I am open to being cajoled into naming some names.
Aside: it is perhaps unfortunate ‘tapping out’ is the lingo for dropping a discussion. In martial arts, (notwithstanding the gloss on the wiki that it can mean ‘one is tired, or at risk of injury, or has simply had one’s fill’) tapping is typically an admission of defeat.
Regardless of the lingo, there is still the advantage of having the ‘last word’ (cf. OP). I could be odd, but I feel this gets outweighed by the much lower visibility of (e.g.) the 5th+ nested comment being seldom more than ‘you and your interlocutor’. In terms of ‘discussion as social fight’, whoever got ratioed in the first 1-2 back and forths on the thread is the loser, even if they make the last ‘rebuttal’.
FWIW I don’t have the impression that the EA forum is more ‘linkedin-y’ than LW nowadays. Besides roughly similar levels of spats/drama, many of my comments there are much meaner towards the OP than Said’s, and I haven’t had the moderators generally ‘on my case’ about them (e.g.).
But there are secular explanations which likely overdetermine, e.g.:
Maybe we’ve run out of useful general things to say, so useful conversation inevitably gets more and more sub-specialised?
Maybe the noughties internet just developed a lot of surplus for places like LW, but nowadays gifted writers want to cultivate their own substack or whatever.
Maybe things have professionalized so the typical commenter who could share interesting takes on AI alignment (or whatever) as an amateur has been recruited to a think tank as a professional.
(Said doesn’t have much lower vote-power than me, I think he currently has a strong-vote strength of 8 or 9, and I have a strong-vote strength of 10)
I also didn’t strong-downvote most of his comments in that thread/on this post, though I have strong-downvoted a few. I do stand behind those votes, as they are the result of reading each of his comments in detail, and only voting so when I do really think they are quite bad. Even invoking standards in which one should justify one’s votes publicly, which I don’t generally subscribe to, I have just written a 15,000 word post about why I think Said’s comments are deserving of downvotes. I also upvoted some of Said’s comments in these discussions.
I can see a somewhat weak case why in this discussion I should not vote, but I really don’t buy it overall. These are bad comments. The resulting discussion has produced little value, much frustration, and in a fitting way for this post has I think resembled many of the worst things that go wrong in discussions with Said. Feel free to upvote them if you do like them, two users of your karma should be enough to cancel them out, so it doesn’t take that much (though I would encourage you to only do that if you actually think they are good[1]). It wouldn’t make sense for the perceived balance of votes to end up skewed away from the usual voting patterns on the site, on this post of all places, and in as much as voting is trying to measure something like net-approval on the site (which to be clear it is at most an extremely noisy approximation of) it would be pretty distortive of that for me to not vote in these threads.[2]
I don’t think Ben intended to score points, though I agree his comment was not written in a way that made that clear (and also gave me a mildly bad taste). Separately, a user deleting and undeleting their account is public knowledge and can be derived from e.g. archive.org archives of any pages where they commented; there is no site-admin info necessary to derive that information, all it would take is more time (and I would answer any similar query about DB info that can be derived that way, for anyone on LW).
Beyond that, I have hit my time limit on engaging with comments on this post, so I won’t respond further. I would appreciate some courtesy[3] to keep discussion to the principles and decision-level instead of critiques of my personal behavior, as indeed much of the cost of moderation is measured in having every moderation-adjacent action torn apart and demands that it be justified or defended.
Though I think adjustive-voting is also a fine use of the voting system, so if you merely think they are too far downvoted, it’s IMO a reasonable choice to use your votes to move them to where you think they are supposed to be
I have a short-ish section on bad voting patterns in the OP which I could contrast with what I think is going on here, but I am going to skip for now due to time constraints. I do think that a strong-vote of 10 is often distortively strong, and in many of these threads wish I had a medium-strong vote, but alas the complexity has so far not been worth it to implement such a feature.
though of course in as much as something seems egregious, you and others should feel free to call it out
I think it would be a good norm to never strong-downvote someone you’re debating, no matter how carefully you’ve read them, because it’s just too easy to be biased in such situations, and it makes people suspicious/resentful/angry (due to thinking that the vote is biased/unfair, and having no recourse or ability to hold anyone accountable), which is not conducive to having calm and productive discussions. Rather surprised that you don’t support or follow this.
I somewhat agree and apply a substantially higher bar to downvoting people I am debating, especially in non-moderation discussions (in the threads on this post, I abstained from voting on a lot of his replies to me, though less on his replies to others, e.g. the Vaniver thread).
As a site-moderator my job is often more messy and I think allows less of this principle than it does for others. In many cases where I would encourage other people to just “downvote and move on”, I often do not have that choice, as the role of actually explaining the norms of the space, or justifying a moderation decision, or explaining how the site works, falls on me. In many cases, if I didn’t vote on those comments, the author would not get the appropriate feedback at all.
Another thing that I think is important is to have gradual escalation. It is indeed better for someone to be downvoted before they are banned. As a moderator, voting is the first step of moderation. Moderators should vote a lot, and pay attention to voting patterns, and how voting goes wrong, because it’s a noisy measure and the moderators are generally in the best position to remove the most distortions. Most moderation should be resolved via just the voting system.
There is a whole post I would like to write about trying to somehow grapple with the concept of “contempt of court”. A hugely common experience of any moderator on the internet is that you write some moderation message trying to pretty gently enforce some principle or rule, and are met with extreme contempt and aggression. Having some ability for moderators to enforce some level of cooperativeness in moderation discussion is important. The cost of someone being a dick to moderators is indeed very high, both in terms of the general ability of the site to have any norms and principles, and because moderator energy is often the limiting factor for a functional forum. I currently consider downvoting people who are dicks to moderators really important. Like, if I didn’t do it, a lot of my moderators would quickly quit, I would probably quit moderating myself, and the consequences for the site would be enormous.
And my current take is due to a bunch of underdog dynamics in online discussions, people get to be extreme dicks to moderators without naturally getting downvotes. Conduct that would routinely get someone downvoted and rate-limited to oblivion, when aimed at moderators or authority figures gets routinely tolerated. I understand people’s instinct to do it, but I can’t do my job that way, and if I had to give up the tool of voting in moderation discussion, I do not think I could do this job.
And to be clear, I have a lot of sympathy with concerns about “contempt of court enforcement mechanisms”. It seems like a pretty dangerous set of tools. The current set of tools on the site we have kind of suck, though also, I think Said is a huge outlier in how much he was contemptuous of any attempts to moderate him, so it might just be less of an issue in the future.
(Remember that, IIRC, we still have the misfeature that you can’t strong upvote your own comments. Perhaps you mention this, I haven’t read much of your comment or these threads)
I haven’t mentioned it, and I do hate it as a feature, and we should change it. Having the default outcome of two people being angry at each other being that everyone is somewhere in the super negatives does seem pretty dumb.
It’s not obvious to me that this is dumb. If two people are super angry at each other, that conversation seems likely to create more heat than light.
I’m not a regular user of LW, but I wanted to weigh in anyway. The style of endless asymmetric-effort criticism can be very wearing on people with perfectionist or OCD-like tendencies. I am, sadly, one of those people. In my head is a multi-faced voice of rage and criticism that constantly second guesses my decisions and thoughts and says many of the same things about anyone else’s work or life or decisions. This kind of thing is one of the faces, able to find fault in anything and treat it all with importance both high and invariant over any sort of context. I think the voice is something like an IFS firefighter. In fact, here he is now:
It’s exhausting and demoralizing. This is far from the only component, to be fair, and I actually don’t doubt that Said is honestly trying to make the world a better place… but this particular flavor of criticism is not making things better. It can be done well, but this isn’t it. This makes people, over time and without really noticing it at first, get a submodule installed in their heads that constantly criticizes, second guesses, attempts to justify, apologizes for, pre-emptively clarifies, and talks itself out of things in every domain of life.
...though I guess that may be a natural attractor state for minds like this. Still, while the circumstances for the ban are unfortunate, I think it was correct. For anyone who wants to do anything, having enough energy to do it is key, and things like this just drain it. It’s like fighting a wall of molasses.
Semi-related, from Richard_Kennaway
I’m very good friends with someone who is persistently critical and it has imo largely improved my mental health, fwiw, by forcing me to construct a functioning and well-maintained ego which I didn’t really have before.
Okay, this is definitely true, too. I also do enjoy a more consistent ability to justify my actions and beliefs, which is far from nothing and not worth writing off. I guess, for me, the missing ingredient is that the other person gets it once I make a logical and reasonable justification; if that happens, I think it’s fine to be friends with a very critical person.
I feel vaguely good about this decision. I’ve only had one relatively brief round of Said commenting, but it’s not free.
If Said returns, I’d like him to have something like a “you can only post things which Claude with this specific prompt says it expects to not cause <issues>” rule, and maybe a LLM would have the patience needed to show him some of the implications and consequences of how he presents himself.
I also feel vaguely good about it, but I feel decisively bad about this suggestion!
I’ve been investigating LLM-induced psychosis cases, and in the process have spent dozens of hours reading through hundreds if not thousands of possible cases on reddit. And nothing has made me appreciate Said’s mode of communication (which I have a natural distaste towards) more than wading through all that sycophantic nonsense slop!
In particular, it has made it more clear to me what the epistemic function of disagreeableness is, and why getting rid of it completely would be very bad. (I’m distinguishing ‘disagreeableness’ here from ‘criticism’, which I believe can almost always be done in an agreeable way.) Not something I really would have disagreed with before (ha), but it helps me to see a visceral failure mode of my natural inclination to really drive the point home.
I think there’s a happy medium between these two bad extremes, and the vast majority of LWers sit in it generally.
FWIW, no need to anonymize if this was an attempt to lightly protect me, this was me:
Also FWIW, I’ve had some genuinely positive interactions with Said in the last couple weeks. I was as surprised as anyone. I don’t know if it’s because he was trying to be on his best behavior or what, but if that was how Said commented on everything, I’d be very happy to see him unbanned (I had even had the idea that if we continued to have positive interactions I would unban him after whatever felt like enough time for me to believe in the new pattern).
This does not seem weird to me at all. LW is a scary place for many newcomers, and many posts get 0–1 comments, and one comment that makes someone feel dumb seems likely to result in their never posting again.
I strongly agree that it’s important to avoid the LinkedIn attractor; I simultaneously think that we should value newcomers and err at least a little bit on the side of being gentle with them.
From my very much outside view, extending the rate limiting to 3 comments a week indefinitely would have solved most of the stated issues.
I have two feature requests in response to this class of concerns.
Problem statement: authors feel pressure to respond to comments even if they think responding is low value. Meanwhile, readers hesitate to comment because they do not wish to impose costs (response costs or social costs) on the author.
Solution: authors can use emoji to tag a comment to indicate why they are choosing not to respond. LessWrong already has this via emoji responses, and I have used them for this purpose (as a comment author). A beneficial side-effect is that emojis can’t be karma-voted, further reducing social pressure. My feature requests aim to improve this avenue.
Tiny: remove emoji question marks. For example, the emoji that says “Seems offtopic?” can just be “Offtopic”, like “Soldier Mindset”. This would make the emoji better express something like “I am not responding because this is (in my opinion) offtopic” rather than “This might be offtopic but I am not sure, I am not responding because I can’t be bothered to find out”. This suggestion also applies to:
Too Combative? → Too Combative
Misunderstands Position? → Misunderstands Position
“Not worth getting into? (I’m guessing it’s probably not worth the time to resolve this?)” → “Not worth getting into (I don’t think it’s worth the time to resolve this)”.
Larger: highlight author emojis. If a post author gives an emoji response to a comment, this can be given more visibility. For example, instead of “🙏 2” in the bottom right of a comment, it could display “🙏 Habryka 1”. This would also cover emoji responses from the author of the parent comment.
Concrete example: I ended a discussion with Said on vegan weirdness points with a “Not worth getting into” emoji, and I think this was a good choice that saved us both time.
More positive example: I replied to a reply about schizophrenia with a “Changed my Mind” emoji and an upvote, and felt good about praising a helpful reply without reducing the signal-to-noise ratio.
Relatedly, I have a draft of a “Bowing out of this thread” react with a bowing Monopoly man, that I think is a more polite ending to a thread than “Not worth getting into”.
Anecdotally, I would perceive “Bowing out of this thread” as a more negative response because it encapsulates both the topic and the quality of my response or my own behavior, while “not worth getting into” is mostly about the worth of the object-level matter. (Though remarking on the behavior of the person you’re arguing with is a reasonable thing to do, I’m not sure that interpretation is what you intend.)
I think a more generic react/emoji like that could be a good addition for cases where none of the existing emoji fit, and for people who don’t want to be specific about why they are not responding further, for whatever reason. Thanks for working on that.
I don’t think “Not worth getting into” is impolite in any way. Replying to a comment consumes time, and it will frequently be the case that someone’s time is better spent on other activities. Since there is no obligation on the author to respond (per habryka’s post), they can’t be considered impolite for not responding further.
I believe you are outright incorrect about how many people will receive this, then! Many people will, in fact, receive that statement as hostile, which will lead to it being underused by people who are concerned with politeness, which will lead to it correctly being perceived as statistically rude.
I intend to say that “Not worth getting into” is not rude on LessWrong, as a normative statement, rather than a descriptive statement about what LW readers will think. Partly it is a normative statement about what (I think) LW culture is, and partly it is a normative statement about what (I think) LW culture should be.
Arguments for what LW culture is
When an activity gives an explicit affordance for something, using it is not rude by default. Destroying someone’s base is rude in a game of Legos, but not rude in a game of Starcraft. Since LW has a “Not worth getting into?” react, using it is not rude by default. If the LW react changed to “Not worth getting into”, that would also be not rude by default. The reacts are therefore a surprisingly powerful tool for shaping LW culture.
Also, as I mentioned above, there is no obligation on the author to respond, per habryka’s post. Any response, even a react, is supererogatory. By reacting the author has given the commenter (and other readers) strictly more information than they are obliged to, at no cost. It is a free gift. Since we don’t believe in Copenhagen Ethics we can’t fault an author for not doing more just because they did something instead of nothing.
Arguments for what LW culture should be
This is partly covered by The LinkedIn attractor in habryka’s post:
It’s not even bad that someone should occasionally say something that is not worth responding to. Threads have to end at some point. There are many things that are worth saying but are not worth responding to. If a culture is at the point where pointing out a not-even-bad thing about a single comment is considered impolite and/or hostile, that culture is deep into The LinkedIn Attractor, and doomed as a rationalist endeavor.
Also, I go back to my problem statement above. It’s valuable for authors to have easy ways to gracefully indicate why they are not responding. LW culture should support authors in choosing how much time to spend responding to comments. Failure to do so results in fewer authors, and greater use of moderation tools to block comments as a preventive measure. It also results in fewer comments by people respectful of the time of authors, without discouraging comments by people who are not so respectful (eg, allegedly, Said). This is bad.
On statistical rudeness
Frequent users of “Bowing out of this thread” reacts and “Not worth getting into” reacts will be slightly different, statistically. That doesn’t make the reacts polite or rude. By analogy, people who wear cowboy hats are statistically different to those who wear bowler hats, but that doesn’t make the hats polite or rude.
Is this worth getting into?
This comment was worth it for me because it’s potentially upstream of LW features & culture, and LW potentially has an impact on the risk of extinction. If you don’t think it’s worth getting into further I will not consider this impolite, rude, or hostile.
Clarification re “emojis can’t be upvoted or downvoted”, which @the gears to ascension and @mruwnik would bet is false. I mean that if I give an emoji react to a post saying “not worth getting into”, I can’t get karma votes on that emoji, whereas if I give a text reply to a post saying the same thing, it can get karma votes and replies from people who think it is worth getting into. Since I don’t want to get into meta-discussions about whether a comment is worth replying to, or have such choices judged by others, that is a feature. I’m interested if I’m missing something here.
I think the reactions are just because de-facto you can vote on reacts:
That’s what the vote button in the bottom right corners are for. You can downvote a react, and if net votes go to zero, it disappears.
Good point. I further feature-suggest that if the author replies “Offtopic” and someone downvotes that it is ontopic, I still want to see the author’s react. Maybe that could be “📌 Habryka −1″.
That is literally what happens! Hidden reacts show up in a small menu in the bottom right corner, and when you hover over that you can see both “upvotes” and “downvotes” on the react:
I wasn’t clear (I should have made a mockup, sorry). I don’t think the author’s react should be in hover-text, I think it should be inline text visible by default without the reader needing to hover anywhere. At least on desktop, anyway. Currently just the react and the number is visible by default.
I started posting to Less Wrong in 2011, under the name Fezziwig. I lost the password, so I made this account for LW2.0. I quit reading after the dustup in 2022, because I didn’t like how the mods treated Said. I started up again this summer; I guess I came back at the wrong time.
Object-level I think Said was right most of the time, and doing an important job that almost no one else around here is willing to do. A few times I thought of trying to do the same thing more kindly; I’m a more graceful writer than he is, so I thought I had a good shot. But I never did it, because I don’t believe Said’s tone was ever really the issue: what upset people, what tended to produce those long ugly subthreads, was when he made a good point that couldn’t be persuasively answered, and didn’t get distracted by evasions. There isn’t, actually, a kind way to ask for examples from someone who doesn’t have any.
That’s not to say all his comments were like that; some really were just bad. But the bad ones didn’t tend to spawn demon threads. People didn’t have to reply, because they knew that he was wrong, instead of just wishing it.
Also, I think that if ”...voting ends up dominated by a small interest[10] group without broader site buy-in, and with no one being able to tell that is what’s going on...distorting people’s perception about the site consensus in particularly high-stakes contexts”, then the right approach is to weaken or remove the misfeature that’s distorting the signal, rather than giving up on using votes to signal site norms or guide ban decisions. But then, I also wouldn’t throw out a claim like that without checking the voting data, if I had access to it.
Anyway, it’s not my problem. I’ve deleted everything I posted here and I’m not going to visit again. The admins could undo it but, Jim, I’d like to ask that you not. I don’t want anything I’ve made to be part of what lesswrong.com is now.
For the record, I personally found the way Said engaged to be annoying at points because I would have preferred to make the same complaint about the post with more tact so that the author might actually fix the issue within 24 hours instead of taking that long to figure out what it was.
Since he had made the comment, it was often difficult to add mine (since the author was busy in another thread!). I don’t really expect any decline in quality on LessWrong because I think the job Said was doing is ~fungible (though I hope everyone reading this comment could have guessed that I know that people are not).
Thank you for your hard work! Neither the decision itself nor the work of justifying it and discussing it is particularly easy, as I can say from experience. I appreciate you putting so much effort into trying to keep the site healthy.
This post has comments from some people who agree and from some people who disagree with the decision. It seems worth making explicit that this discussion may underrepresent the number of people who agree, because some of the people with the strongest agreement would be the ones who’ve already left the site because of Said.
I don’t think this sort of abstract analysis is valid. For instance, you could argue that it may underrepresent the people who disagree, because it’s become increasingly clear that Said-style criticism is unwelcome on LW in the past few months, as the conflict has escalated.
Think it’s just really hard to know without doing a lot of work.
I think it’d be more accurate to say that “there’s this other factor too” rather than “this analysis is not valid”?
There are a number of comments expressing disagreement that have gotten a fair number of upvotes, so it doesn’t look to me like expressing disagreement would be unwelcome.
Edited to add: I should also mention that I don’t think this comment came out of “abstract analysis”. It came from the fact that back when I banned Eugine Nier, I then reached out to a user who had left the site because of him to let them know their harasser was banned. The user’s response was basically, “glad to hear, but I still don’t feel like coming back”. So at least in one previous case, users who had left because of a now-banned user were actually permanently out of the resulting discussion.
I agree, the meta-point of selection bias is valid but the direction of bias is unclear.
Not strictly related to this post, but I’m glad you know this and it makes me more confident in the future health of Lesswrong as a discussion place.
It’s been many years since I’ve been active on LW, but while I was, Said was the source of a plurality of my unpleasant interactions on this site. Many other commenters leveled serious criticisms of my writing, but only Said consistently ruined my day while doing so.
I cannot say whether this decision was right in the end, but will attest that seeing this post made me happy.
Tangential feature request: allow people to embed other comments in posts natively. This article uses screenshots of LessWrong to display conversations, but this does not responsively size them for mobile users and makes it harder to copy-paste stuff from this post, which a native implementation could fix.
Yeah, I think better content embedding is a thing that would be useful for a few reasons.
FWIW, I am sad both that Said has been banned, and that Duncan left.
I’d also like to say that a lot of Duncan’s conflict-oriented nature in the Duncan/Said moderation post and comments, as well as other posts where they interact, was precisely because of the issues described in the section But why ban someone, can’t people just ignore Said?, in that it’s much less easy to ignore comments than a lot of people realize.
While it doesn’t explain all of the conflict, I do think it explains a non-trivial amount of the reason why Duncan has the tendency to get into conflict with Said, because there’s a social norm that criticism has to be responded to in order for your post to be seen as correct.
I don’t plan on doing this, but who is on the board of Lightcone Infrastructure? This doesn’t seem to be on your website.
Daniel Kokotajlo, Vaniver and me! (We should update our website sometime)
This outcome makes me a little sad. I have a sense that more is possible.
How would this situation play out in a world like dath ilan? A world where The Art has progressed to something much more formidable.
Is there some fundamental incompatibility here that can’t be bridged? Possibly. I have a hunch that this isn’t the case though. My hunch is that there is a lot of soldier mindsetting going on and that once The Art figures out the right Jedi mind tricks to jolt people out of that mindset and into something more scout-like, these sorts of conflicts will often be resolvable. From The Scout Mindset:
I’m not sure what those Jedi mind tricks would look like of course, but I’ll hypothesize that they’d look something like what is recommended in Nonviolent Communication (NVC). Specifically, starting off making sure each side feels that they are understood before moving on to any attempts at argument. Or maybe people in the field of conflict resolution have some answers.
I don’t believe Said is having very contingent bad interactions with tons of commenters and the mod team, but rather that this is a result of a principled commitment to a certain kind of forum commenting behavior that involves things like any commenter being able to demand answers to questions at the risk of the post-author’s status, holding extreme disdain and disrespect for interlocutors while being committed to never saying anything explicitly or even denying that it is the case, and other things discussed in the OP, that in combination are extremely good at sucking energy out of people with little intellectual productivity as a result. My guess is that if we played the history of LW 2.0 over like 10 more times making lots of changes to lots of variables that seem promising or relevant to you, the outcome would’ve eventually been the same basically each time.
To take your proposal, I think it’s likely that Said has literally written a disdainful comment about NVC — yep, I looked a little, Said writes “It has been my experience that NVC is used exclusively as a means of making status plays”, and here is a longer thread of Said strongly criticizing an aspect of NVC — so your first proposal will not only not succeed, but in fact be aggressively rejected and treated with hostility.
Yeah, I hear ya. I don’t see any low hanging fruit here such as attempting to apply NVC. What I mean is that I think there are solutions out there that we haven’t discovered yet. And not in the distant sense of “have nanobots rewire Said’s brain”; I suspect that The Art really does contain solutions that aren’t super distant or high-tech.
I notice that you’re reaching for it too.
It’s promising that, despite Said’s criticisms of NVC, and despite your doing exactly what you described as the thing “something like” NVC, it’s going quite well so far. That’s a harder test, but when you get the principles right, things work regardless. It doesn’t matter if Said’s stance is “NVC is about status plays” because you aren’t doing status plays, and it shows.
Respect.
I have some suggestions for mechanistic improvements to the LW website that may help alleviate some of the issues presented here.
RE: Comment threads with wild swings in upvotes/downvotes due to participation from few users with large vote-weights; a capping/scaling factor on either total comment karma or individual vote-weights could solve this issue. An example total-karma-capping mechanism would be limiting the absolute value of the displayed karma for a comment to twice its parent’s karma. An example vote-weight-capping mechanism would be limiting vote weight to the number of votes on a comment. The total-cap mechanism seems easier to implement if LW just records the total karma for a comment rather than maintaining the set of all votes on it. Any mechanism like those described has some issues though, including the possibility of users voting on something but not seeing the total karma change at all.
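To make the proposal concrete, here is a minimal sketch of the two capping mechanisms, in TypeScript. All names and data shapes are hypothetical illustrations, not anything from the actual LW implementation:

```typescript
// Hypothetical shape of the karma data the two mechanisms would need.
interface CommentKarma {
  totalKarma: number;  // sum of weighted votes, as recorded today
  voteCount: number;   // number of individual votes cast on the comment
  parentKarma: number; // karma of the parent comment (or the post itself)
}

// Total-karma-capping: clamp the *displayed* karma to twice the
// absolute value of the parent's karma. Note the issue mentioned above:
// once a comment is at the cap, further votes won't change the display.
function displayedKarma(c: CommentKarma): number {
  const cap = 2 * Math.abs(c.parentKarma);
  return Math.max(-cap, Math.min(cap, c.totalKarma));
}

// Vote-weight-capping: a single vote may weigh no more than the number
// of votes already on the comment (floored at 1 so fresh comments can
// still be voted on at all -- an assumption, not part of the proposal).
function effectiveVoteWeight(rawWeight: number, c: CommentKarma): number {
  const cap = Math.max(1, c.voteCount);
  return Math.sign(rawWeight) * Math.min(Math.abs(rawWeight), cap);
}
```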
RE: Post authors (and commenters) not having enough information about the behavior of specific commenters when deciding whether/how to engage with them, and the cruelty of automatically attaching preemptive dismissals to comments; it does not seem more cruel to publicly tag a user’s comments with a warning box saying “critiques from this user are usually not substantive/relevant” than to ban them. This turns hard-censorship into soft-censorship, which seems less dangerous to me, and also like it could be more easily applied by moderators without requiring hundreds of hours of deliberation.
RE: Going after only the most legible offender(s) rather than the worst one(s); giving users and moderators the ability to mark a commenter’s interactions throughout a thread as “overall unproductive/irrelevant/corrosive/bad-faith” in a way that allows users to track who they’ve had bad interactions with in the past, and allows moderators better visibility into who is behaving badly even when they have not personally seen the bad behavior (with the built-in bonus of marking examples). These marks should only be visible to the user assigning them and to moderators, for what I think are obvious reasons. A more general version of this system would be the ability to assign tags to users for a specific comment/chain (e.g. “knowledgeable about African history”, “bad-faith arguer”) that link back to the comment which inspired the tag. Such a system is useful for users who have a hard time remembering usernames, but could also unfortunately result in ignoring good arguments from people after a single bad interaction.
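A sketch of what the private tagging record and a simple moderator-side aggregation could look like, again with every name hypothetical:

```typescript
// One privately-assigned tag, visible only to its author and moderators.
interface UserTag {
  taggerId: string;        // user who assigned the tag
  targetUserId: string;    // commenter being tagged
  label: string;           // e.g. "bad-faith arguer"
  sourceCommentId: string; // comment/chain that inspired the tag
}

// Moderators could aggregate tags to spot patterns of bad behavior
// they haven't personally witnessed.
function flaggedUsers(tags: UserTag[], label: string, threshold: number): string[] {
  const counts = new Map<string, number>();
  for (const t of tags) {
    if (t.label === label) {
      counts.set(t.targetUserId, (counts.get(t.targetUserId) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([userId]) => userId);
}
```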
Meta: I am new and do not know if this is an appropriate place for site-mechanic suggestions, or where to find prior art. Is there a dedicated place for this?
This is a good place! There isn’t a super central repository for this. You can take a look at the Site Meta and LW Moderation tags to find other posts in the same reference class.
I’d like you to consider removing votes entirely, to be subsumed entirely by reacts. These allow more nuance and are importantly not anonymous. I believe this is importantly more similar to how humans in the ancestral environment would think about and judge community contributions, in ways that are conducive to good epistemics and incentives. (There are also failure modes that would be important to think about, such as a ‘seal of approval’ dynamic.)
Aggregating this well for the purposes of sorting and raising to attention would be tricky, but seems plausibly doable and worth it to me.
However, I expect that this is already something you have thought about a lot more than I have and have apparently not decided to do, so I am also curious to hear why not.
Many people would be much less inclined to vote if it was fully public, so you would lose a lot of signal.
Would the signal to (noise + adversarial signal) ratio improve?
Edit: thinking about it more, I’m unsure; it seems plausible the answer is no. (The react was added before the edit.)
I have a question about the 3-year ban: why did you choose a temporary ban over an indefinite ban?
In particular, given the history of Said Achmiz here, including the case where he repeated the same behavior he had been rate-limited for before, I am a bit confused about what you are hoping to achieve by performing a temporary ban in lieu of an indefinite one:
3 years is long enough that LessWrong might be a very different place by then, or Said might have changed quite a bit, or maybe things will have actually sunk in in 3 years. I think it’s likely for the threshold for rebanning to be pretty low in 3 years, but it seemed to me potentially worth it to leave some door open in the more distant future.
Sure. I think this is a good decision because it:
Makes LessWrong worse in ways that will accelerate its overall decline and the downstream harms of it, MIRI, et al’s existence.
Alienates a hard working dude who puts in a lot of hours and professional expertise outside of commenting on LessWrong.
Frees up Said to work on other projects which are a more valuable use of his time.
I can’t really thank you for banning him because I’m fond of him, but I can thank you for making the mistake of banning him. A mistake I can only thank you for because I know it will not be reversed.
May God bless you and inspire similar decisions in the future. :)
Free Hearing, Not Speech seems like a better approach to me. Give users the affordances to automatically see the kinds of comments they want to interact with, or the conversations they want to have. Users don’t have to see what they believe is bad-faith, low-effort, rude criticism. Users who disagree can still see said criticism. Let people moderate the conversations they want to see themselves, but do not let them moderate the conversations others want to see.
Maybe this doesn’t fully resolve your issues with @Said Achmiz, Habryka. He can still call people out for doing this, and damage their reputation in ways you think are unjust. Fine. But he, at worst, does that rarely. The bulk of your problem with him is that he’s writing really aggravating comments that make it costly for people to post on LW, and moreover claiming that these costs should be paid. Which, in turn, makes it more likely that people who interact with him believe they will pay such costs. Letting them hide comments from Said should fix that. Or, if you want to go further, hide comments like Said’s ahead of time by toggling some “no-Gadflys” setting on comment-visibility.[1]
AFAICT, this also seems good by Said’s lights, in the sense that everyone else can see his comments by default. They can see his critique, and judge its merits for themselves. Which are frequently good IMO. But others should be free to make that judgement themselves.
EDIT: Said, I believe you should be able to reply to this comment. If not, my apologies for discussing what you may believe in a comment you can’t reply to. In which case, I can put anything you want to say in an Edit to this post. Just tag me elsewhere, DM me, or, IDK, I can share my email with you by DM or something.
(I think it is possible to build this now.)
If you put a bunch of work into a post, knowing that most other people are seeing a low-quality but very forceful/sneer-y criticism which you haven’t replied to is a lot of discouragement.
Auto-mute posts replying to the posts you don’t like.
How does that help?
Low priors on this happening + out of sight, out of mind basically resolve the discouragement issue IMO.
Like, this works well enough on Twitter. There are all sorts of people saying stupid stuff that I know would enrage or discourage me. But I’ve muted enough nonsense that I don’t have to see it, and I’ve got no interest in seeking it out. Why not do that here, but better?
I think one of the core problems here is authors not believing in “out of sight, out of mind”. If people reasonably believe that the author not responding to a comment is evidence that the author can’t respond to that comment, the visibility of that comment for readers but not the author still generates reader-impressions that the author doesn’t want.
Of course the flip side of this is also problematic—if people reasonably believe that the absence of critical comments is a sign of quality, the invisibility of criticism generates reader-impressions that the readers don’t want. And so moderation involves judging which of those is more important.
Yeah, you’re right.[1] Your point holds strong on LW because you’re trying to reach the entirety of the LW user base with your posts, competing with other posters for the singular front-page/popular comments/recent discussion sections. That’s an important disanalogy to e.g. Twitter or Mastodon. (Another is the lack of emphasis on followers/following.) Kinda reminds me of an agora? I’m guessing that’s the sense in which Said compared LW to a public forum.
But @habryka’s kinda giving me the sense that he doesn’t want LW to be like an agora. Honestly, I’m not sure what he wants LW to be. IIRC, sometimes he describes LW as being like a university, sometimes like an archipelago of cultures. But those are more decentralized than LW is. Like, you’ve got all these feeds which give everyone the same reading materials, which try to expose everyone’s work to the whole LW reader base by default. That’s more like a public forum in my mind. So yeah, mixed vibes. Habryka, if you’re reading this, I’d be interested in reading your thoughts on what sort of social system LW is and should be, and how that differs from the examples I gave above.
Returning to my proposal, I still think a lot of the costs people bear when replying to low-effort/disdainful criticism can be addressed by various forms of muting. But definitely not all the costs, and perhaps not even most.
@plex, if you were pointing at the same thing Vaniver was pointing at, then you were right, too.
I of course have lots of thoughts! My current tentative take is that ideally I would like LessWrong to be a hierarchy of communities with their own streams and norms, which when they produce particularly good output, feed into a shared agora-like space (and potentially multiple levels of this).
Reddit is kind of structured like this. Subreddits each have their own culture, but the Reddit frontpage and people’s individual feeds are the result of the most upvoted content in each Subreddit bubbling up to a broader audience.
I think Reddit is lacking a bunch of other infrastructure to do this properly for the things I care about, and I would like a stronger universal culture than Reddit currently has, but it’s a decent pointer for one structure that seems promising to me (LessWrong is far away from this for a bunch of different reasons that I could go into, but would take time, so I am going to keep it at this for now).
Thank you for the answer! I do share the sense that LW is far from where Reddit is at, and (separately?) from where you tentatively want it to be. If you’re considering writing this up in more detail, then I’d be glad to read it.
The difference between Twitter and LessWrong is that Twitter is more like a random chaos maelstrom, and LessWrong is more like a community. Some random guy saying something obnoxious on Twitter is different from someone who’s going to have a lot of repeat interactions and is affecting your reputation in a shared social circle.
(Also, plex’s argument was basically specifically arguing why this strategy didn’t reliably work, and IMO your comment just sort of restated your original argument without engaging with his additional argument)
You completely omitted my post about Said, and my response to your responses on that post:
https://www.lesswrong.com/posts/SQ8BrC5MJ9jo9n83i/said-achmiz-helps-me-learn , cross-posted at Data Secrets Lox.
I’ll have to follow his comments elsewhere.
My philosophy is no more “totalizing” than that which is described in, say… the Sequences. (Or, indeed, basically any other normative view on almost any intellectual topic.) Do you consider Eliezer to have constantly been “making dominance threats” in all of his posts?
EDIT: Uh… not sure what happened here. The parent comment was deleted, and now this comment is in the middle of nowhere…?
You could run an LLM every time someone tries to post a comment. If a top-level reply tries to nitpick something that isn’t key to the post, the LLM could say “It seems like you are trying to nitpick a point that’s not central? Do you really want to post this comment?”
While I hope it’s gotten less frequent, I do think I have written some comments myself in the past criticizing posts for minor issues that aren’t central to them. For me, a gentle nudge from an LLM asking “It seems like you are nitpicking something minor, do you really want to do that?” would reduce the comments I write in the mode of “someone said something wrong on the internet; it’s not central to their post, but it’s wrong, so let’s write a comment pointing out that it’s wrong”.
The same mechanism could also be used for other classes of comments that you want to have less of. An LLM can easily analyze whether a comment falls into that bucket and then ask the user whether they really want to post the comment.
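A minimal sketch of such a pre-submit check; `llm` here is a stand-in for whatever model API the site would actually use, and the prompt wording is just an illustration:

```python
# Hypothetical sketch of an LLM-based pre-submit nudge.
from typing import Callable, Optional

NITPICK_PROMPT = (
    "Here is a post and a draft top-level reply. Answer YES if the reply "
    "nitpicks a point that is not central to the post, otherwise answer NO.\n\n"
    "POST:\n{post}\n\nREPLY:\n{reply}"
)

def presubmit_nudge(
    post_text: str,
    draft_comment: str,
    llm: Callable[[str], str],  # stand-in for the site's model API
) -> Optional[str]:
    """Return a gentle nudge to show the commenter, or None to post silently."""
    verdict = llm(NITPICK_PROMPT.format(post=post_text, reply=draft_comment))
    if verdict.strip().upper().startswith("YES"):
        return ("It seems like you are nitpicking something that isn't central "
                "to the post. Do you really want to post this comment?")
    return None
```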
I think the crux is what feeds the dangerous norms, and what makes norms dangerous. I expect that when considered in detail, Said or most others with similar behaviors aren’t intending or causing the kinds of damage you describe to an important extent. But at the same time, norms (especially insane ones) feed on first impressions, not on detailed analyses.
Such norms might gain real power and do major damage if they do take hold. I don’t believe they have, and so the damage you are describing is overstated, but the risk the norms represent is real. Said might be an unusually legible referent when doing a search for foundations of such unfortunate norms, but it’s not necessarily correct that he’s a meaningful contributing factor to the present extent to which these norms persist, and there isn’t necessarily a live dynamic where these norms are increasing their hold over time rather than remaining at some annoying and not completely harmless background level.
So this decision seems like a case of the most forbidden technique, where the effort gets directed to the most legible entity related to a problem, even as it remains unclear if there is a causal influence, or if the avoidable part of the problem is important in its current form. Once the more legible signs of the problem are gone, the problem becomes less salient, but it doesn’t necessarily go away (or improve at all) if it actually has many other causes. Vigilance fades, and if the problem does get worse (so that it becomes actually important to mitigate), it does so more silently, getting a better shot at becoming a catastrophe.
The affective conflationary alliance discussion is interesting (it likely would’ve been better standalone). This has implications for the architecture of internal judgement, dangers of forming conflationary alliances among your own understandings when making holistic judgements. This is a distinction between non-specific contemplation of some decision for an extended period of time, and doing detailed analyses from dubious technical premises followed by dismissal of the poorly founded but legible conclusions and settling the matter with an intuitive overall judgement that’s only very implicitly informed by that process.
But also, hard decisions matter less: the issue with conflationary alliances is more about the reigning norms being opposed to specificity than about methodologies for making good decisions in particular cases. The methodological problem is about the effect of these norms, rather than about the effect (on the decisions) of the dynamics that feed the norms. The dynamics that feed the attractor norms aren’t necessarily directly anti-epistemic at all. A snap decision not informed by throwaway detailed analyses isn’t necessarily meaningfully worse than going through the exercise of a detailed analysis, because most decisions that matter won’t be that hard anyway; it would be possible to get good answers with less reasoning. The problem is the externalities of feeding the norms that eventually take over the efficient snap decisions and break their sanity, making them systematically wrong even for the easy decisions. (And of course the technical exercises also have the positive externality of informing future snap judgements, and they feed the norms of finding occasions to do more exercises.)
Couldn’t prediction markets solve this? Make one for each decision by judges, asking whether you’d agree with them. After some time, randomly choose to investigate one such market and resolve it. Perhaps make pay for judges conditional on the market predicting high agreement with your judgement. Of course, there are liquidity issues, but: 1) these events are “hot” and would attract lots of bettors, 2) you probably don’t need sub-5% accuracy here, 3) don’t make resolution more than a year or so out.
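A toy version of the random-audit scheme described above; the 0.9 agreement threshold and all names are assumptions, not a worked-out design:

```python
# Toy sketch: randomly audit one judge-decision market and pay the judge
# only if the market predicted high agreement. All parameters hypothetical.
import random
from typing import Optional

def audit_and_pay(
    markets: dict[str, float],  # decision id -> market's predicted P(agreement)
    judge_pay: float,
    agreement_threshold: float = 0.9,
    rng: Optional[random.Random] = None,
) -> tuple[str, float]:
    """Pick one market uniformly at random to investigate and resolve; the
    judge is paid only if that market predicted agreement above threshold."""
    rng = rng or random.Random()
    audited = rng.choice(list(markets))
    pay = judge_pay if markets[audited] >= agreement_threshold else 0.0
    return audited, pay
```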
Disclaimer: Note that my analysis is based on reading only very few comments of Said (<15).
To me it seems the “sneering model” isn’t quite right. I think often what Said is doing seems to be:
1. Analyze a text for flaws.
2. Point out the flaws.
3. Derive from the demonstrated flaws some claim that shows Said’s superiority.
One of the main problems seems to be that in 1. any flaw is a valid target. It does not need to be important or load bearing to the points made in the text.
It’s like somebody building a rocket and shooting it to the moon, and Said complaining that the rocket looks pathetic. It should have been painted red! And he is right about it. It does look terrible and would look much better painted red. But that’s sort of… not that important.
Said correctly finds flaws and nags about them. And these flaws actually exist. But talking about these flaws is often not that useful.
I expect that what Said is doing is to just nag about all the flaws he finds immediately. These will often be the unimportant flaws. But if there are actually important flaws that are easy to find, and are therefore the first thing he finds, then he will point out those. This can be very useful! How useful Said’s comments are depends on how easy it is to find flaws that are useful to discuss vs. flaws that are not useful to discuss.
Also: Derivations of new flaws (3.) might be much shakier and often not correct. Though I have literally only one example of this so this might not be a general pattern.
Said seems to be a destroyer of the falsehoods that are easiest to identify as such.
Please could you write a policy regarding what obligations/duties/commitments/responsibilities people DO have, by contributing to LessWrong, regarding responding to comments? This could be a top-level post similar to Policy for LLM Writing on LessWrong.
After reading Banning Said Achmiz..., and associated comments, I thought that I understood LessWrong policy. However, the next thing I noticed on this topic was Sabien’s Obligated to Respond, which was then curated. After reading this and associated comments, I am no longer confident. In any case I don’t really want to read Banning Said Achmiz every time this topic arises. So I request a policy post with more clarity, less drama, and fewer words.
My suggested policy is something like:
LessWrong authors do not have a duty to respond to comments and questions. By posting a top-level essay or quick take, authors do not commit to answer questions, respond to criticism, or otherwise engage with commenters.
In the same way, by posting a comment, commenters are not obligated to continue to participate in that conversation.
Do not demand responses to comments, or criticize someone for not responding. If you think a comment is important and that a response would be valuable, you could vote it up.
Of course this doesn’t mean that there are no consequences in choosing not to respond, that you will never feel pressured to respond, or that people in the audience won’t be swayed by unanswered comments. However, LessWrong admins and moderators do not support these dynamics and will work to reduce them.
An example of a different policy a site might have is:
When posting a top-level essay or quick-take, you are inviting comments, including questions and critiques. Please budget some time to respond to a selection of comments. If you will be too busy to respond, please note this at the end of your essay so readers know what to expect.
In the same way, by posting a comment, you are inviting replies, and especially replies by the author. Please do not post comments if you do not want the author to respond. If you have a minor comment or question that does not warrant an author response, please keep it to yourself.
I think that would be worse, but I would still appreciate the clarity. Or a hybrid policy could be maximally top-level-author-friendly:
LessWrong authors do not have a duty to respond to comments and questions. By posting a top-level essay or quick take, authors do not commit to answer questions, respond to criticism, or otherwise engage with commenters.
However, by posting a comment to a top-level essay, you are still implicitly demanding a response from the author. The author may feel pressured or obligated to respond, driving them away from LessWrong. If you have a minor comment or question that does not warrant an author response, please keep it to yourself.
As it stands I have a few ideas for top-level essays and I am unsure what exactly I would be signing up for in terms of reader-interaction. Conversely, if every comment is implicitly demanding an author response, I will make dramatically fewer comments, possibly none.
I think it’s a bit weird since the obligation isn’t really something we could authoritatively determine using site policy, but I agree that clarifying our best guess of the prevailing norms more would be good.
FWIW, I think the policy I would choose is something like:
There are more thoughts I have on this, but figured I would leave this short comment with some initial thoughts.
Thanks for replying. I would prefer the policy you describe to the status quo of people having different ideas what the norms are. Perhaps this would be combined with a policy statement on “Do not try to win arguments by fights of attrition”.
I don’t think it’s a weird subject to have a policy on. Thinking of the Policy on LLM Writing:
The policy states what obligations people have to LessWrong itself. These obligations are notable for having some moral and legal force, and having moderator enforcement.
Of course any random person may think I have an obligation to do more, or less. But that has no moral force.
In the absence of a policy, we get debates as on Deontic Explorations in Paying to Talk to Slaves about (in part) whether certain content is acceptable on LessWrong. After the policy, there is an objective answer to that question, and fewer debates.
I think a policy on responding to comments would be similarly helpful. For example, as I read through the section “But why ban someone, can’t people just ignore Said?” above, it only really works as a debate in the absence of a site policy. Achmiz says:
That line of argument doesn’t work if there is a site policy that authors are not expected to respond to comments. Firstly, the attack itself is subject to moderation. Secondly, anyone, not just the author, can defuse it by linking to the site policy, which conveniently has a space where the policy can be discussed. Certainly site policy can’t stop Achmiz thinking I’m ignorant. But it can reduce the extent to which Achmiz can convince the rest of the audience that I’m ignorant.
LessWrong/Lightcone doesn’t have to weakly clarify its best guess of the prevailing norms. It can state what the norms are, in a self-fulfilling statement that sets the norms to what it states. As long as the stated norms are broadly popular, this just works.
Moving this top-level question by @Sting to this comment thread:
Which top authors did Said Achmiz drive away?
Habryka recently decided to ban Said Achmiz. He wrote an extensive post explaining the decision. There were some very good things about this decision at the meta level, such as having one person make the decision and take full responsibility for it, explaining the reasoning in detail, and giving Said a comment thread under which he can respond.
However, I did not find the specific examples given for the ban persuasive. E.g., the example given under
did not seem remotely ban-worthy to me.
The key claim which, if true, would justify a ban, was:
If Said is driving away many top authors, then he is at the very least guilty of being a bad cultural fit. And if someone chooses to act in a way that imposes costs on the website, and those costs are greater than the benefits he provides, he has no right to complain when you ban him.
But the key piece of information missing from the post is: which top authors did Said drive away?
The only example I am aware of is Duncan, but he doesn’t count. Habryka explicitly said:
And even more strongly:
So which top authors left? Of course, anyone who permanently left the site would not have left any comments on the post. But a few top authors[1] did mention bad personal experiences with Said:
Matt Goldenberg (5,600 karma) found Said’s comments unpleasant.
philh (7,800 karma) had mixed experiences:
Gordon Seidoh Worley (10,000 karma) also describes a mix of good and bad experiences:
Several other top authors defended Said[2]. I was not able to find, in either the post or the comments, any firsthand or secondhand examples of top authors who left because of Said. This question has been asked before, by Said himself:
To which habryka responded (emphasis added):
To which Zack replied:
Scott Alexander, when asked about Said, said:
And Jacob Falkovich’s view of Said is positive.
The fact that two of the examples on the list were incorrect throws the rest of the list into doubt.
So the question remains:
Which top authors cite Said as a reason they “do not want to post on the site, or comment here”?
Said is a popular author, with 17,000 karma. Someone whose comments and posts are generally well-received should not be banned lightly. But if someone compiles a list of authors[3] who left the site because of Said (with direct quotes to that effect), and if those authors collectively have more than 17,000 karma, then that is at the very least a strong argument for banning him.
I am not sure what constitutes a top author, but I will tentatively define it as someone with over 5000 karma.
Alexander Gietelink Oldenziel (5,800 karma) found his comments valuable.
Wei Dai (41,000 karma) will also miss him.
Richard_Kennaway (7,800 karma) is sad that Said was banned.
Alicorn (30,000 karma) is disappointed and dismayed by the ban:
Excluding Duncan
Responding here briefly, though I would really like to make people understand I am not going to generally respond to things like this, as I really don’t want to spend even more time on this and am far beyond my allotted 10 hours.
This totally misrepresents what I said! I even clarify directly in the comments on this post:
The thing I am saying here is that Said’s engagements with Duncan in that comment thread are not the cause of me banning him. It doesn’t say anything about Duncan’s complaints which long preceded that engagement!
Separately, I also clarify a bunch of times in this comment section that no author complaints were load-bearing for this banning decision. I would make the same decision even if no prominent authors had complained. I had much more than enough direct engagement with Said, and seen many more than enough comments of his on my own to understand the consequences of his commenting style first-hand. Author complaints are not a load-bearing part of any of this decision-making, that’s why it really isn’t emphasized much in the post above and why I instead give detailed models for 10,000+ words! I think it’s totally fine for someone to be uncompelled by that, but excluding datapoints about which author “counts” by your own lights, based on whether they played a role in the banning decision is confused, because no author complaints ended up load-bearing for the banning decision.
To me, by far the most compelling reason for the ban was:
I don’t want to take up a significant amount of your time, but can you at least answer the yes-or-no question, of whether you still stand by this claim (the bolded part)?
Sure, I definitely stand by it (though I will again reiterate for like the 15th time that the vast majority of complaints about other users are not made in public but are made privately either through Intercom or direct conversation in user interviews we conduct, though there are definitely some that are public, some of which I quote in the thread mentioned above).
I don’t think this is a fair accusation.
If that’s your position, fine, but it does not straightforwardly follow from what you wrote. You were responding to Alexander Gietelink Oldenziel’s comment:
Wei Dai then chimed in:
You responded to Alexander (emphasis added):
Based on both your comment and the context, it looked like you were referring to Duncan/Said interactions in general, not to a specific thread.
Your clarification does not appear anywhere under Alexander’s original top-level comment. The comments total over 70,000 words, so I do not think it is fair to accuse me of misrepresenting you because I missed a clarification elsewhere.
Fair enough. My true reason for not counting Duncan is that he appears to be an unusually sensitive individual, who often gets mad at people without good reason. I was quoting you to establish (as a non-controversial, “bipartisan” point) that Said’s interactions with Duncan were not ban-worthy.
Sure, I am not saying your misreading of what I intended to convey was totally unreasonable, but it definitely wasn’t accurate to what I meant to convey and things I said in other places. I didn’t mean to imply much of any malice in you doing so and am sorry if it came across that way!
I personally think what I wrote was reasonably clear, but communication is hard, especially in a sprawling comment thread like this. Seems like we mostly cleared it up (and I can edit the OP comment with any edits, or transfer ownership fully over to you, if you want to change what you wrote in response to that).
Edit: Maybe a misunderstanding in this and other threads is that somehow you expect most people who complain about Said did so after they had comment threads with Said? That’s definitely not the case! Most people who complain about Said never had a long back-and-forth with him, they formed their impressions from his engagements with other people. Most effects from Said are chilling effects, not something that you should have any expectations to chase back to a specific comment thread (as is the case with most cultural effects, as well as effects from moderation).
Yes, your reply makes your position clear. I don’t feel like taking the time to edit my comment, but thank you for offering to edit in any changes.
Also, you definitely have my sympathy for the amount of time you have burned on this! I would not want your job.
Note that this is more centrally an example of microeconomics-informed reasoning about the role of punitive damages in civil law, not criminal law, as illustrated by this classic article making basically this argument about punitive damages.
The other complaint I had about that segment is that I do not believe microeconomics-informed reading of criminal punishment (as exemplified by Gary Becker’s work) has held up well.
I think it’s often given as an example of where microeconomics-informed reasoning has led policymakers astray (as criminals are often bad at expected-value calculations, even intuitively), and that certainty of punishment matters far more than the expected cost of punishment. I don’t have a direct source for this but I think it’s a common position among economists.
Criminal justice is not criminal law! I think it’s normal to refer to part of civil litigation under the broader umbrella of “criminal justice” but maybe I am wrong? Like, I guess I have never heard the term “civil justice” used instead, and I don’t know of a better term that clearly spans both.
Just realized I never responded to this—I would just use the term “civil law” (as I did). For a term that covers both, “the legal system” perhaps, altho it’s a bit too broad, and you’re right that there’s not a great option.
Fwiw, “civil justice” is in fact used to refer to (many?) parts of the (US?) justice system where people and entities sue each other. It’s true the phrase is less common (in part because criminal justice reform is a hot-button issue) and I’m not surprised you haven’t heard it.
Here’s an example of a reputable seeming organisation using it this way. https://instituteforlegalreform.com/blog/what-is-civil-justice-and-why-is-it-important/
Huh, OK, I am convinced. Do you know of any umbrella term that would meaningfully cover both? “Legal justice”? Not sure whether I ever heard that one before, but maybe it’s real.
I think depends on context and what you’re trying to use the term for. In the original sentence that was quoted, I think “the justice system” or “law enforcement” would have worked though colloquially, some people would have misunderstood both as just implying the criminal side.
I think “the legal system,” “legal conflict,” or just “the law” also would’ve basically worked though each might have a slightly different connotation.
I would not use the term “criminal justice” to describe civil law, since civil law deals with civil wrongs rather than crimes.
Relevant evidence from the Wikipedia page on Criminal justice:
Civil law lacks parts 1 and 3.
Violating a civil court order can result in prison time, and if you get convicted in civil court, the police will enforce that judgement, so seems like it has all three?
I agree one could maybe make some argument that it’s not “criminal justice” until you “commit a crime by violating a civil court order”, but that seems confused to me, especially when thinking about things like enforcement and discovery dynamics. The civil courts power is directly downstream of its ability to enforce its judgements using criminal punishments.
Also, separately, that article sure is very America-centric. For example Germany has one set of courts for both criminal and civil cases, IIRC.
I can’t comment on how things work in Germany, since they have a very different structure of law (one that, my guess is, English-language terms are not well-designed for), but:
This is what I think—in particular, the “criminal justice system” is the system that involves dealing with crimes, and the “civil law system” is the system that involves dealing with civil wrongs. You’re correct that they relate, but there are enough distinctions (who brings cases, proof standards, typical punishments, source of the laws) that I think it makes sense to distinguish them. I further think that most people with enough context to know the difference between civil and criminal law would not guess that a similarly informed person would use the term “criminal justice system” to cover civil law.
Having spent two years in law school, I feel pretty confident that Daniel’s right about this.
Kudos for this heading. A passing pun on someone’s name is a great way of poking fun & mildly insulting them (warranted in this case). I am reminded of a paper critiquing one by QM physicist Henry Stapp, entitled “A Stapp in the wrong direction”.
I don’t know if you know this, but if you encourage this “correctly” (something that I suspect literally no one knows how to do but which we can aim for) it also helps you in that no one can accuse the team of being secretly fractious (since it would be public).
Is this decision generally considered final and not subject to appeal, or do you expect comments on here/arguments by Said/etc to affect the final outcome you decide on?
He says, under the section titled “So what options do I have if I disagree with this decision?”:
Yep, the overall decision is very unlikely to change. It’s still a good place to express your disagreement, and especially so if you disagree with the more generalized case-law I tried to abstract away from this case.
Some days it’s hard to not start rooting for the paperclip maximizers.
Some days I actually do start rooting for the paperclip maximizers, but so far I’ve returned to not rooting for them in an hour or a day or two.
I’ve been chewing on the contents of this post for a week+ now.
I think the decision behind this post lurched my set point permanently towards, though not all the way to, “root for the paperclip maximizers”, assuming habryka isn’t overridden or removed for this.
When a site that’s supposed to be humanity at its most rational removes one of its backstops against unimpeded woomongering, in an attempt to get back authors who honestly seem happier and better-compensated writing on their Substacks, I’m tempted to cancel my pre-order of IABIED and shelve that one post that’s been rattling around in my head that amounts to “Given that CCP cooperation is essential for notkilleveryoneism to win, have any of you Bay Areans really thought about how an NGO push in the PRC is going to look to them, in light of all the other NGO/quango initiatives that the US has been pushing, which the CCP actively defends against because they obviously are bad for the CCP and/or PRC as a whole?”.
I change my mind too frequently on the paperclip-maximizer question to deactivate my account or let the domain registration for https://www.threemonkeymind.com/ lapse, but I’m updating strongly towards LW not being a place where I want to help raise the local sanity waterline, since this sort of work is actively being thwarted by the moderation team.
I don’t really think that these things have anything to do with each other (whether humanity should flourish vs being killed and turned into paperclips, and whether Said should be banned from LessWrong).
I also reject your characterization that we’re intentionally sacrificing epistemic standards to get back good authors. My story is that this is much more to do with my opinion that Said makes lots of demands of authors to explain things to his satisfaction, and yet often cannot seem (to me) to accept basic explanations of things either out of being dense or because of a commitment to not changing his mind (which he has endorsed sometimes doing for periods of at least a few days here).
Older version of this comment kept for posterity:
A commenter that is epistemically committed to not changing his mind in the face of evidence and argument, and who demands people explain things to his satisfaction over and over and over again at the risk of being labeled ignorant and laughable, being banned from this webforum, has little-to-nothing to do with whether it’s better for humanity to have a flourishing future or instead to be turned into paperclips. Insofar as you’re genuinely unsure about whether you like paperclips a lot, perhaps buy some and find out? I mean I don’t actually think you might like paperclips that much, I think that you’re trying to say something about losing faith in LW but instead said something that doesn’t make sense.

most of what i want to say is about The Sneer Attractor and The Niceness Attractor, and is unrelated to Said. is there some canonical post on that? i think this part of the post should have been a separate post, to allow discussion of that.
***
This is Bad:
Elizabeth said roughly “if you don’t change your behavior in some way you’ll be banned”
He did not change his behavior, we did not end up banning him at this time, and he also did not stop participating on LW.
as in, it makes the things the LW team says untrustworthy, and the LW team should not do that. empty threats are bad, and they are especially bad in a place that tries to develop and act on some version of Decision Theory.
***
also, all those hours and all this time look like waste to me. aka, in my model, most of the time invested here didn’t generate original thoughts and was just running the same things on repeat. so if i was part of the LW team, i would be asking myself “How could I have thought that faster?” or rather, how could i have come to the same decision while skipping the useless waste.
***
there is an option that looks obvious to me looking at this situation. when someone provides both important positive value on axis x and important negative value on axis y, then worries about losing the important x can and should be alleviated by trying to acquire x in some other way, without the y downsides.
in my model, Said provided both positive x and negative y in his writing here. (and i don’t actually count ReadTheSequences and GreaterWrong, as i don’t use them. GW is probably slightly net-negative for me.) at his best, what he provided is not just avoiding the Niceness Attractor, but reminding us of important rational principles, and i did gain a lot from his comments. i don’t actually know how to get this positive x otherwise.
summing it all up, i’m pretty sure it would have been better to ban him sooner, but i also gained a lot from reading his writing, and will be happy to read him in other places. it’s important to me, while summing this all up, to see both the positive contributions and the negative. they are both real, and they don’t cancel each other out. i think it was the right decision, in the end. but it’s sad we don’t have the option to have only the good without the bad. it’s a really good good, and i expect to encounter illuminating comments from Said in the future, as i read old posts. i hope to see him in other places.