Can you give some examples of such people? (Are you one of them?)
My guess is something like more than half of the authors on this site who have posted more than 10 posts that you commented on, about you, in particular. Eliezer, Scott Alexander, Jacob Falkovich, Elizabeth Van Nostrand, me, dozens of others. This is not a rare position. I would have to dig to give you an exact list, but the list is not short, and it includes large fractions of almost everyone who one might consider strong contributors to the site.
We have had this conversation many times. I have listed examples of people like this in the past. If you find yourself still incapable of modeling more than 50% of top authors on the site whose very moderation guidelines you are opining on, after many many many dozens of hours of conversation on the topic, maybe you should just stay out of these conversations, as you are clearly incapable of modeling the preferences of the majority of people who would be affected by your suggested changes to the moderation guidelines.
A good start, if you actually wanted to understand any of this at all, would be to stop strawmanning these people repeatedly by inserting random ellipses and question marks and random snide remarks implying the absurdity of their position. Yes, people have preferences about how people interact with them that go beyond obvious unambiguous norm violations, what a shocker! Yes, it is of course completely possible to be hostile in a plausibly deniable way. Indeed, the most foundational essay for the moderation guidelines on this site mentions this directly (emphasis mine):
Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Well-kept gardens do not tend to die by accepting obviously norm-violating content. They usually die by people being bad discourse participants in plausibly deniable ways, just kind of worse, but not obviously and unambiguously worse, than what has come before. This is moderation 101. Yes, of course authors, and everyone else, will leave if you fill a space with people just kind of being bad discourse participants, even if they don’t do anything egregious. How could reality work any other way?
My guess is something like more than half of the authors on this site who have posted more than 10 posts that you commented on, about you, in particular. Eliezer, Scott Alexander, Jacob Falkovich, Elizabeth Van Nostrand, me, dozens of others.
You are making false claims. Two of these claims about the views of specific individuals are clearly contradicted by those individuals’ own statements, as I exhibit below.
I reached out to Scott Alexander via Discord on 11 July 2025 to ask if he had “any specific feelings about Said Achmiz and whether he should be allowed to post on Less Wrong”. Alexander issued this statement:
I have no direct opinion on him. I have heard his name as someone who’s very confrontational, and I agree that this can make a website less pleasant, but I can’t remember having any personal experience.
Separately, as I mentioned to you in our meeting of 26 June 2025, in a public comment of 9 October 2018, Jacob Falkovich wrote (bolding added):
Said, I have seen a lot of your comments on LW, on my posts and the posts of others. They are, by my standards, high on criticism and low on niceness. I personally formed an impression of you as disagreeable. Even though I have argued myself that LW should optimize for honesty over niceness, still the impression of you disagreeableness was colored negatively in my mind.
But now that you’ve stated that you’re disagreeable on purpose, the negative effect flipped entirely to become positive. Instead of you being disagreeable by accident, it’s intentional. I like diversity, and I support people who are on a mission to bring a new flavor to the community. Knowing this also makes it easier to take criticism from you—it’s not that you hate me or what I write, it’s just that you don’t care if someone thinks you hate them and their writing. The Bayesian update in the two cases is very different!
Thanks for the follow-up! I talked with Scott about LW moderation a long time ago (my guess is around 2019) and Said’s name came up then. My guess is he doesn’t remember. It wasn’t an incredibly intense mention, but we were talking about what makes LW comment sections good or bad, and he was a commenter we discussed in that conversation in 2019 or so.
I think you can clearly see how the Jacob Falkovich one is complicated. He basically says “I used to be frustrated by you, but this thing made that a lot better”. I don’t remember the exact time I talked to Jacob about it, but it had come up at some point in some context where we discussed LW comment sections. It’s plausible to me it was before he made this comment, though it would be a bit surprising to me, since that’s pretty early in LW’s history.
Almost every single author on this site who has posted more than 10 posts, about you, in particular
Roll to disbelieve.
incapable of modeling the preferences
I share something like Achmiz’s incredulity, but for me, I wouldn’t call it an inability to model preferences so much as disapproval of how uninterested people are in arguing that their preferences are legitimate and should be respected by adults who care about advancing the art of human rationality.
Achmiz has argued quite eloquently at length for why his commenting style is conducive to intellectual progress. If someone disagrees with that case on the intellectual merits, that would be interesting. But most of the opposition I see seems to appeal not to the intellectual merits, but to feelings: that Achmiz’s comments make authors feel bad (in some way that can’t be attributed to a breach of etiquette rules that could be neutrally enforced), which makes them not want to use the website, and we want people to use the website.
I’m appalled that the mod team apparently takes this seriously. I mean, okay, I grant that you want people to use the website. If almost everyone who might use your website is actually that depraved (which sounds outlandish to me, but you’re the one who’s done dozens of user interviews and would know), I guess you need to accommodate their mental illness somehow for pragmatic reasons. But normatively (dealing with the intellectual merits and not feelings), you see how the problem is with everyone else, not Achmiz, right?
About what? Well specifically the last paragraph. But also I think we fundamentally disagree on what the gradient toward better rationality looks like. As in, what kind of norms should be promoted and selected for.
My view is something like: it’s very important to have a good model for how emotions (like annoyance, appreciation, liking, disliking, that kind of thing) work, and one should take on significant efforts to optimize communication for that, both in personal writing style and with respect to how a community is moderated.
I think your view is probably something like: this is an atrocious idea, WTAF, we should instead try to get away from focusing on feelings since they are the noise rather than the signal, and should judge everything on intellectual merit insofar as that is possible. (Plus a whole bunch of nuance, we probably don’t want to do things that intentionally make other people angry, maybe a bit of hedging is appropriate, maybe taking our own emotions into account to the extent that we can correct for them is good, etc. Idk how exactly you feel about, e.g., Leaving a Line of Retreat.)
Assuming this is roughly correct, I’d first want to pose a hypothetical. Suppose it were in fact the case that my vision of rationality works better, in the sense that communities which are built around the kind of culture I’m envisioning lead to better outcomes. (Not better outcomes as in “people are more coddled and feel better” but in terms of epistemic performance, however that would be measured.) Would this actually be a crux?
I’m starting with this because I’m noticing that your last paragraph-
I’m appalled that the mod team apparently takes this seriously. I mean, okay, I grant that you want people to use the website. If almost everyone who might use your website is actually that depraved (which sounds outlandish to me, but you’re the one who’s done dozens of user interviews and would know), I guess you need to accommodate their mental illness somehow for pragmatic reasons. But normatively (dealing with the intellectual merits and not feelings), you see how the problem is with everyone else, not Achmiz, right?
-does not actually ask this question, but just takes it for granted that not coddling is better. So if coddling were in fact better, would this actually make a difference, or would you still just reject the approach?
Assuming it is a crux, the second thing I’d ask is, why are you confident in this? Wouldn’t your experience suggest that most people on this site aren’t particularly good at this rationality thing? Why did Eliezer get the trans thing wrong? Why didn’t everyone agree with you immediately when you tried to fix it? (I hope this doesn’t open a huge rabbit hole.)
My estimate would be something like, there are around 2-4 people on this site who are actually capable of the kind of rationality that you’re envisioning, and since one of them is you, there’s like 1-3 others. The median LW person—and not just the median but up to the 80th percentile, at least—is strongly influenced by style/vibes/kindness/fluff/coddling in what they engage with, how long they continue engaging with it, and in how much they update their beliefs. This view seems to me to be very compatible with the drama around Said, everything that happened to you, with which posts are well-received, and really everything I can observe on this site. (I don’t think it’s incompatible with the community’s achievements or the amount of mind-changing that does take place.)
And even if there are more people who are capable of not letting vibes affect their judgment/beliefs/etc (though I’m not conceding that there are), it would still take significantly more effort, and effort is absolutely an important bottleneck. It is important (importantly bad) if something zaps people’s energy. Energy (in the sense of willpower/motivation/etc.) is the relevant currency for getting stuff done, for most people.
Since I think you think that my vision would be terrible if it were realized, one point I want to make is that being nice/considerate/coddling does not actually require you to lie, at all. I know this because I tend to try much harder than most to not make someone feel bad (I think), and I can do it without lying. I was kind of giggling when thinking about how to do that in this comment because in some sense, trying to be nice to you is insulting (because it implies that I don’t respect your ability to be unaffected by vibes). But I decided to do it anyway just because then I can use it as an illustration of the kinds of things my model entails. So, here’s an incomplete list of things I’ve done in this comment to make it feel nicer.
Listing my view first rather than yours first (because the natural flow was to list one position and then open the second one with how much it disagrees with the first position—so the default version would have been “here’s what I think you believe, but I think that would actually be very bad because xyz”, but by flipping it around I get to trash talk my view rather than yours)
Using derogatory language for my position (“coddling”)
Including a compliment
Lots of other details about how I write to make it sound less arrogant, which have become even more automatic at this point than the stuff above, and it’d actually take significant effort to not do them. (Using ! rather than . for some sentences is an example; it tends to be status-lowering.)
This is all, I think, pretty typical stuff that I do all the time when I communicate with people via text on important things (usually without calling attention to it). I used to not do any of it, and in my experience, my current style of communicating works vastly better. (It works tremendously better outside LW, but I still think it works significantly better on LW as well.) And it didn’t require me to lie, or even water down my argument. A nice feature about how most humans work is that their emotions are actually determined more by platitudes and status comparisons than by your actual position, which means you can usually even tell them that you think they’re completely wrong without making them feel bad, if you package it right. In fact, I believe that the kind of norms you’re envisioning would be a disaster if they were enforced by the mod team, but given how I’ve written my remaining comment, I think I could get away with saying this even if I were talking to the median LW user, without making them feel animosity toward me.
(I realize that I’ve just been talking about 1-1 interactions but this is a public forum, will get to that now.)
So, with a model like the one I’ve sketched out, the idea that we should step in if a user makes other users uncomfortable seems completely reasonable at first glance. (Like, most people aren’t in fact that good at rationality, it’ll make them annoyed, less rational, zap their energy, seems like a clear net negative.) Now Said said here that the value of his comments isn’t about what the author feels like, it’s about the impact on the whole forum. Very good point, but...
… these things aren’t actually separate. It’s not like vibes exist independently between any two people in a discussion. They are mostly independent for each top-level comment thread. But if A makes a post, B leaves a top-level comment that’s snarky and will hurt the feelings of A, then I’m not gonna go in there and talk to B as if A didn’t exist. I know (or at least am always assuming) that A is present in the conversation whether they reply or not because it’s their post (and I know I care a ridiculous amount about comments on (most of) my posts). This completely colors the subsequent conversation.
As for Said specifically, I have no memories of being upset about his comments on my posts (it’s possible it happened and I forgot), but I have many (non-specific) memories of seeing his comments on different posts and being like, “ohh no this is not going to be helpful :(” even though iirc I agree with him more often than not. My brutally honest estimate as to the total impact of these comments is that it lands below neutral. I’m not super confident in this—but I’m very confident that the net impact would be a lot more positive if he articulated identical points in a different style. The claim that a lot of people had issues with him strikes me as plausible. As I said, I think there’s just not much of a tradeoff here. I mean there’s a tradeoff for the commenter since it takes effort to be nice. But there’s not much of a tradeoff for the product (the comment). Maybe it’ll be longer, but, I mean.
Counterpoint: I’m much more vibe-sensitive than the median LW user, so even if most people’s rationality will be damaged by having an unfriendly comment directed at them, maybe most of them won’t care if they just see B being unfriendly to A. My response: definitely directionally true; this is why I’m not confident that Said’s comments are a net negative. Maybe they’re a net positive because of the effect on other people.
Another counterpoint: maybe B being rude to A colors the vibe initially, but not if it spawns a huge comment thread between D and E about something only vaguely related to the original post; at that point it doesn’t matter whether B was nice to A (but B made it happen with their initial response). My response: also true, still not enough to overturn my conclusion.
(More just explaining my model.)
I don’t think there is altogether much evidence that the instrumental rationality part of the sequences is effective. (Like How To Actually Change Your Mind.) I completely grant that LW is vastly better than the rest of the internet at people changing their mind, but that can be equally explained by people who are already much better at changing their mind being drawn into one community.
One reason is that LW still sucks at this, even if the rest of the internet sucks way more. But the more important reason is that if you observe how mind change happens when it does happen, it rarely looks like someone applying a rationality technique from the sequences—and when it does look like that, it’s probably either a topic that the person wasn’t that invested in in the first place, or the person is you.
I think the overarching problem here is that Eliezer didn’t have a good model of how the brain works, and LW still doesn’t have it today, and because of that, rationality techniques as taught in the sequences are just not going to be very effective; you’re not going to be good at manipulating a system if all your models for how the system works are terrible. (Ironically, beliefs about how the brain works are a prime example of the category of belief that is now very sticky and almost impossible to change with those tools.) There was a tweet, I don’t have a link anymore, where someone said that the main thing people got out of the sequences was just this vibe that a lot more was possible. I think this is true, and the discussion on LW about it that I remember seemed to take it seriously, but like, what a gigantic indictment of the entire project! His understanding of the brain sucked so badly that his entire collection of plans operating in his framework was less effective than a single out-of-model effect that he didn’t understand or optimize for! If this is even slightly true, it clearly means that we should care a hell of a lot more about vibes than we currently do, not less! (Though, obligatory disclaimer that even if the sequences functioned 100% only as a community-building tool, which is a more extreme claim than what I think is true, they would probably still have been worth it.)
In case it wasn’t clear, I think all the caring about vibes is entirely justified for instrumental reasons alone. I do think it’s also terminally good if people feel better, but I think everything I said holds if we assign that 0 weight.
I agree that it’s important to optimize our vibes. They aren’t just noise to be ignored. However, I don’t think they exist on a simple spectrum from nice/considerate/coddling to mean/callous/stringent. Different vibes are appropriate to different contexts. They don’t only affect people’s energy but also signal what we value. Ideally, they would zap energy from people who oppose our values while providing more energy to those who share our values.
Case in point, I was annoyed by how long and rambly your comment was and how it required a lot of extra effort to distill a clear thesis from it. I’m glad you actually did have a clear thesis, but writing like that probably differentially energizes people who don’t care.
Thanks for this interesting comment!—and for your patience. I really appreciate it.
one point I want to make is that being nice/considerate/coddling does not actually require you to lie, at all
I absolutely agree with that statement; the problem is that I think not-lying turns out to be a surprisingly low standard in practice. Politicians and used car salesmen are very skilled at achieving their desired changes in people’s beliefs and behavior without lying, by listing a bunch of true positive-vibe facts about the car and directing attention away from the algorithm they’re using to decide what not to say—or what evidence not to look for, prior to even saying anything.
The most valuable part of the Sequences was the articulation of a higher standard than merely not-lying—not just that the words you say are true, but that they’re the output of a search process that would have returned a different answer if reality were different. That’s why a key thing I aspire to do with my writing is to reveal (a cleaned-up refinement of) my thought process, not just the conclusion I ended up at. On the occasions when I’m trying to sell my readers a car, I want them to know that, so that they know that they need to read other authors to learn about reasons to not buy the car (which I haven’t bothered to come up with). The question to be asking is not, “Is this lying?—if not, it’s permissible”, but, “Is this maximally clear?—if not, maybe I can do better.”
All this to say that I’m averse to overtly optimizing the vibes to be more persuasive, because I don’t want to persuade people by means of the vibes. That doesn’t count! The goal is to articulate reasoning that gets the right answer for the right reasons, not to compute actions to cause people to agree with what I currently think is the right answer.
But you know all that already. I think you’re trying to advocate not so much for making the vibes persuasive, but for making sure the vibes aren’t themselves anti-persuasive in a way that prevents people from looking at the reasoning. I think I’m in favor of this! That’s why I’m so obsessed with telling abstract parables with “timeless” vibes—talk about bleggs and rubes, talk about the Blue and Green teams, talk about Python programs that accept each other’s outputs as inputs—talk about anything but real-world object-level disputes that motivate seeking recourse in philosophy, which would be distracting. (I should mention that this technique has the potential failure mode of obfuscating object-level details that are genuinely relevant, but I’m much less worried about that mattering in practice than some of my critics.)
But that kind of “avoid unnecessarily anti-persuasive vibes” just doesn’t seem to be what’s at issue in these repeated moderation blow-ups?
Commenters pointed out errors in my most recent post. They weren’t overtly insulting; they just said that my claim was wrong because this-and-such. I tried to fix it, but still didn’t get it right. (Embarrassing!) I didn’t take it personally. (The commenters are right and my post as written is wrong.) I think there’s something pathological about a standard that would have blamed the commenters for not being nice enough if I had taken it personally, because if I were the type to take it personally, them being nicer wouldn’t have helped.
Crucially, I don’t think this is a result of me having genetically rare superhuman rationality powers. I think my behavior was pretty normal for the subject matter: you see, it happened to be a post about mathematics, and the culture of mathematics is good at training people to not take it personally when someone says “Your example doesn’t work because this-and-such.” If I’m unusually skilled at this among users of this website, I think that speaks more to this website being a garbage dump than to me being great. (I think I want to write a top-level post about this aspect of math culture.)
Using derogatory language for my position (“coddling”)
Sneaky! (I’m embarrassed that I didn’t pick up on this being a deliberate conciliatory tactic until you flagged it.)
Only under a pretty generous interpretation of knowing. I certainly didn’t have a good model for this standard of communication when I wrote my comment, which I agree is much higher than just not lying. (And I’ve been too lazy to read your posts on this in the past, even though I’ve seen them a few times.)
But, I think caring for vibes is compatible with this standard as well. The set of tools you have to change vibes is pretty large, and in my experience it’s almost always possible to adjust them not just without lying, but while still explaining the actual reasons for why you believe the thing you’re arguing for.
But that kind of “avoid unnecessarily anti-persuasive vibes” just doesn’t seem to be what’s at issue in these repeated moderation blow-ups?
I do think that’s the issue.
So, this is the comment that was causally upstream of all the recent discussion under this post here.
The vibes of this comment are, imo, very bad, and I think that’s the reason why Gordon complained about it. Four people voted it as too combative (one of them being Gordon himself).
habryka said that Said triggers a sizeable part of all complaints on this site, so I guess there’s not really a way to talk about this in non-specific terms, so I’ll just say that I think this is a very central example, and most other cases of where people complain about Said are like this as well.
Could one have written a comment that achieves the same things but has better vibes? In my opinion, absofuckinglutely! I could easily write such a comment! (If that’s a crux, I’m happy to do it.) I have many disagreements with Said (as demonstrated in the other comment thread), but maybe the biggest one is over the claim that changing presentation is changing content. Sure, that’s literally true in a very narrow sense, but I think practically it’s just completely wrong. (I mean now I’m just repeating my claim from the second paragraph.)
(I agree that the religion post had issues and imo Said pointed out one of them. Conversely, I saw the post, figured I’d disagree with it, and deliberately declined to read it and write a response, as I often do. Which is to say, I agree that there was some value in Said writing it, whether it’s a net positive or not.)
Commenters pointed out errors in my most recent post. They weren’t overtly insulting; they just said that my claim was wrong because this-and-such. I tried to fix it, but still didn’t get it right. (Embarrassing!) I didn’t take it personally. (The commenters are right and my post as written is wrong.) I think there’s something pathological about a standard that would have blamed the commenters for not being nice enough if I had taken it personally, because if I were the type to take it personally, them being nicer wouldn’t have helped.
Right, but this example looks highly dissimilar to me. Gurkenglas was being very brief/minimalistic, which could be considered a little rude, but (a) the context is completely different (this was a low-stakes situation in terms of emotional investment, what he said doesn’t invalidate the post at all, and he was correcting an objective error—all of this is different from Gordon’s post), and (b) Said’s comment still has actively worse vibes. (And Gurkenglas’ comment seems to be the only one that could even be considered rude; the other two people who commented were being actively nice.) So, I agree that any standard that would make these comments not okay would be extremely bad. I also agree that your reaction, while good, is not particularly special, in the sense that probably most people would have dealt with this just fine.
Could one have written a comment that achieves the same things but has better vibes? In my opinion, [absolutely]! I could easily write such a comment! (If that’s a crux, I’m happy to do it.)
I don’t think you can. The reason why the comment in question has aggressive vibes is because it’s clearly stating things that Worley predictably won’t want to hear. The way you write something that includes the same denotative claims with softer vibes is by means of obfuscation: adding a lot of puffy hedging verbiage that makes it easier for a distracted or conflict-averse reader to skim over the comment’s literal words without noticing that a rebuke is intended. The obfuscated version only achieves the same things in the minds of sufficiently savvy readers who can reverse the vibe-softening distortion and infer the original intent.
Strong disagree. Said’s comment does several things that have almost no function except to make vibes worse, which means you can just take those out, which will make the comment shorter. I will in fact add in a little bit of hedging and it will still be shorter overall because the hedging will require fewer words than the unnecessary rudeness.
Here’s Said’s comment. Here’s a not-unnecessarily-rude-but-still-completely-candid-version-that’s-actually-166-characters-shorter-than-the-original-and-that-I-genuinely-think-achieves-the-same-thing-and-if-not-I’d-like-to-hear-why-not:
I think it’s bad to participate in organized religion because it exposes you to intense social pressure to believe false things.
You can find religions you can practice without being asked to give up your honest search for truth with no need to even pretend to have already written the bottom line.
This may formally be true, i.e., you may not be officially asked to believe false things. But if your social context consists of people who all believe approximately the same false things, and the social context is organized around those beliefs, and the social context valorizes those beliefs, then the social pressure will be intense nonetheless. And some of these false beliefs are fairly subtle! (Eliezer discusses this in the sequences.)
I also got asked about how I feel about religions and truth seeking. My answer is that you shouldn’t think of religions as being about the truth as rationalists typically think of it because religions are doing something orthogonal.
...which I think is just an example of damage done by a religion. The claim that “you shouldn’t think of religions as being about the truth as rationalists typically think of it” seems like typical anti-epistemology.
Religion is about “the truth as rationalists typically think of it”. There is nothing but “the truth as rationalists typically think of it”, because there’s just “the truth”, and then there are things which aren’t truth claims at all, of any kind (like preferences, etc.). But get into religion, start relaxing your epistemic standards just a bit, and you can descend into this sort of nebulous “well there’s different things which are ‘true’ in different ways, and what even is ‘truth’, anyway”, etc. And then your ability to know what’s true and what’s false is gone, and nothing is left but “vibes”.
This takes it down from about an 8/10 rudeness to maybe a 4 or 5. Is anyone going to tell me that this is not sufficiently blunt or direct? Will non-savvy readers have to read between the lines to figure out that this is a rebuttal of the core idea? I think the answer is clearly no; if people see this comment, they will immediately view it as a rebuttal of the post’s thesis.
The original uses phrases like
And here we have a perfect example of the damage done by religion.
This is not any more direct than saying
I think this is just an example of the damage done by religion.
These two messages convey exactly the same information, the first just has an additional layer of derision/mockery which the second doesn’t. (And again, the second is shorter.) And I know you know this difference because you navigate it in your own writing, which is why I’m somewhat irritated that you’re talking as if Said’s comments were just innocently minimalistic/direct.
Thanks, that was better than most language-softening attempts I see, but …
These two messages convey exactly the same information
Similar information, but not “exactly” the same information. Deleting the “very harmful false things” parenthetical omits the claim that the falsehoods promulgated by organized religion are very harmful. (That’s significant because someone focused on harm rather than epistemics might be okay with picking up harmless false beliefs, but not very harmful false beliefs.) Changing “very quickly you descend” to “you can descend” alters the speed and certainty with which religious converts are claimed to descend into nebulous and vague anti-epistemology. (That’s significant, because a potential convert being warned that they could descend into anti-epistemology might think, “Well, I’ll be extra careful not to do that, then,” whereas a warning that one very quickly will descend is less casually brushed off.)
That’s what I meant by “obfuscation” in the grandparent: the softer vibes of no-assertion-of-harmfulness versus “very harmful false things”, and of “can descend” versus “very quickly descend”, stem from the altered meanings, not just from adjusting the vibes while keeping the meanings constant.
And I know you know this difference because you navigate it in your own writing, which is why I’m somewhat irritated that you’re talking as if Said’s comments were just innocently minimalistic/direct.
It’s not that I don’t know the difference; it’s that I think the difference is semantically significant. If I more often use softer vibes in my comments than Said, I think that’s probably because I’m a less judgemental person than him, as an enduring personality trait. That is, we write differently because we think differently. I don’t think website moderators should require commenters to convincingly pretend to have different personalities than they actually have. That seems like it could be really bad.
Okay—I agree that the overall meaning of the comment is altered. If you have a categorical rule of “I want my meaning to be only this and exactly this, and anything that changes it is disqualified” then, yes, your objection is valid. So consider my updated position to be something like, “your standard (A) has no rational justification, and also (B) relies on a false model of how people write comments.” I’ll first argue (A), then (B).
Similar information, but not “exactly” the same information. Deleting the “very harmful false things” parenthetical omits the claim that the falsehoods promulgated by organized religion are very harmful. (That’s significant because someone focused on harm rather than epistemics might be okay with picking up harmless false beliefs, but not very harmful false beliefs.) Changing “very quickly you descend” to “you can descend” alters the speed and certainty with which religious converts are claimed to descend into nebulous and vague anti-epistemology. (That’s significant, because a potential convert being warned that they could descend into anti-epistemology might think, “Well, I’ll be extra careful not to do that, then,” whereas a warning that one very quickly will descend is less casually brushed off.)
It is logically coherent to have the parenthesized reactions. But do you think it’s plausible? What would be your honest probability assessment that a religious person reads this and actually goes that route—as in, they accept the claims of the comment but take the outs you describe in the parentheses—whereas if they had read Said’s original comment instead, they’d still accept the premises, and this time they’d be convinced?
Conversely, one could imagine that a religious person reads Said’s version and doesn’t engage with it because they feel offended, whereas the same person would have engaged with my version. (Which, obviously, I’d argue is more likely.)
At this point, my mental model of you responds with something like
You’re probably correct on the consequential analysis (i.e., the softened version would be more likely to be persuasive)[1], but I don’t think it follows that we as a community should therefore moderate vibes because [very eloquently argued case about censorship being bad that I won’t try to replicate here]
To which I say, okay. Fine. I don’t think there is a slippery slope here, but I think arguing this is a losing battle. So I’ll stop with (A) here.
My case for (B) is that the algorithm which produced Said’s message didn’t take these details into account, so changing them doesn’t censor or distort the intent behind the message. Said didn’t run an assessment of exactly how harmful the consequences are, determine that they’re most accurately described as “very harmful” rather than “harmful” or “extremely harmful”, and then post it. Ditto with the other example.
I’m not sure how much evidence I need here to make this point, but here are some ways in which you can see that the above is true:
If you did consider the meaning to this level of detail, then you wouldn’t write “very quickly you descend” because, well, you might not descend, it’s not 100%, so you’d have to qualify this somehow.[2]
Thinking this carefully about the content of your messages takes a lot of time. Said doesn’t take this much time for his comments, which is how he can respond so quickly.
If you thought about the actual merits of the proposal, then you’d scrap the entire second half of the comment, which is only tangentially relevant to the actual crux. You would be far more likely to point out that a good chunk of the post relies on this sentence:
and to the extent anything that doesn’t consider itself a religion provides these, it’s because it’s imitating the package of things that makes something a religion.
… which is not justified in the post at all. This would be a vastly more useful critique!
So, you’re placing this extreme importance on the precise semantic meaning of Said’s comment, when the comment wasn’t that well thought-out in the first place. I’d be much more sympathetic to defending details of semantic meaning if those details had been carefully selected.
The thing that’s frustrating to me—not just this particular point in this conversation but the entire vibes debate—and which I should have probably pointed out much earlier—is that being more aware of vibes makes your messages less dependent on them, not more. Because noticing the influence allows you to adjust. If you realize a vibe is pushing you to write X, you can then be like, hold on that’s stupid, let me instead re-assess how whatever I’m responding to right now actually impacts the reasons why I believe the thing I believe. And then you’ll probably notice that what you’re pushed to write doesn’t really hit the crux at all and instead scrap it and write something else. (See the footnote[3] for examples in this category.)
To put it extremely bluntly, the thing that was actually causally upstream of the details in Said’s message was not a careful consideration of the factual details; it was that he thinks religion is dumb and bad, which influenced a parameter sent to the language-generation module that output the message, which made it choose language that sounded more harsh. This is why it says “perfect example” and not “example”, why the third paragraph sounds so dismissive, why the message contains no !s, why he said “very quickly you descend” rather than “you can descend”, and so on. The vibe isn’t an accidental by-product, it’s the optimization target! Which you can clearly observe by the changes I’ve pointed out here.
… and on a very high level, to just give a sense of my actual views on this, the whole thing just seems ridiculously backwards in the sense that it doesn’t engage with what our brains are actually doing. Like I think it happens to be the case that not listening to vibes is often better (although this is a murky distinction because a lot of good thought relies on what are essentially vibes as well—it’s ultimately a form of computation), but the broader point is that, whatever you want to improve, more awareness of what’s actually going on is going to be good. Knowledge is power and all that.
If you don’t think this, then that would be a crux, but also I’d be very surprised and not sure how I’d continue the conversation then, but for now I’m not thinking too much about this.
Alright, for example, the first thing I wrote when responding to your comment was about you quoting me saying “These two messages convey exactly the same information”. I actually meant to refer to the specific line I quoted only, where this statement was more defensible. But I asked myself, “does this actually matter for the crux?” and the answer was no, so I scrapped it. The same thing is true for me quoting Gordon’s response and pointing out that it fits better with my model than yours, and a snide remark about how your parenthetical ascribes superhuman rationality powers to religious people in particular.
Now you may be like, well those are good things, but that’s different from vibes. But it’s not really, it’s the same skill of, notice what your brain is actually doing, and if it’s dumb, interfere and make it do something else. More introspection is good.
I guess the other difference is that I’m changing how I react here rather than how someone else reacts. I guess some people may view one as super good and the other as super bad (e.g., gwern’s comment gave off that vibe to me). To me these are both good for the same reason. Deliberately inserting unhelpful vibes into your comment is like uploading a post with formatting that you know will break the editor and then being like “well the editor only breaks because this part here is poorly programmed, if it were programmed better then it would do fine”. In any other context this would pattern-match to obviously foolish behavior. (“I don’t look before crossing the street because cars should stop.”) It’s only taken seriously because people are deluded about the degree to which vibes matter in practice.
Anyway, I think you get the point. In retrospect I should have probably structured a lot of my writing about this differently, but can’t do that now.
What would be your honest probability assessment that a religious person reads this and actually goes that route
Sorry, phrasing it in terms of “someone focused on harm”/”a potential convert being warned” might have been bad writing on my part, because what matters is the logical structure of the claim, not whether some particular target audience will be persuaded.
Suppose I were to say, “Drug addiction is bad because it destroys the addict’s physical health and ability to function in Society.” I like that sentence and think it is true. But the reason it’s a good sentence isn’t because I’m a consequentialist agent whose only goal is to minimize drug addiction, and I’ve computed that that’s the optimal sentence to persuade people to not take drugs. I’m not, and it isn’t. (An addict isn’t going to magically summon the will to quit as a result of reading that sentence, and someone considering taking drugs has already heard it and might feel offended.) Rather, it’s a good sentence because it clearly explains why I think drug addiction is bad, and it would be dishonest to try to persuade some particular target audience with a line of reasoning other than the one that persuades me.
Deliberately inserting unhelpful vibes into your comment is like uploading a post with formatting that you know will break the editor and then being like “well the editor only breaks because this part here is poorly programmed, if it were programmed better then it would do fine”. In any other context this would pattern-match to obviously foolish behavior. (“I don’t look before crossing the street because cars should stop.”)
I don’t think those are good metaphors, because the function of a markup language or traffic laws is very different from the function of blog comments. We want documents to conform to the spec of the markup language so that our browsers know how to render them. We want cars and pedestrians to follow the traffic law in order to avoid dangerous accidents. In these cases, coordination is paramount: we want everyone to follow the same right-of-way convention, rather than just going into the road whenever they individually feel like it.
In contrast, if everyone writes the blog comment they individually feel like writing, that seems good, because then everyone gets to read what everyone else individually felt like writing, rather than having to read something else, which would probably be less informative. We don’t need to coordinate the vibes. (We probably do want to coordinate the language; it would be confusing if you wrote your comments in English, but I wrote all my replies in French.)
the thing that was actually causally upstream of the details in Said’s message [...] was that he thinks religion is dumb and bad, which influenced a parameter sent to the language-generation module that output the message, which made it choose language that sounded more harsh. [...] The vibe isn’t an accidental by-product
Right, exactly. He thinks religion is dumb and bad, and he wrote a comment that expresses what he thinks, which ends up having harsh vibes. If the comment were edited to make the vibes less harsh, then it would be less clear exactly how dumb and bad the author thinks religion is. But it would be bad to make comments less clearly express the author’s thoughts, because the function of a comment is to express the author’s thoughts.
whatever you want to improve, more awareness of what’s actually going on is going to be good
Absolutely. For example, if everyone around me is obfuscating their actual thoughts because they’re trying to coordinate vibes, that distortion is definitely something I want to be tracking.
to just give a sense of my actual views on this, the whole thing just seems ridiculously backwards
what matters is the logical structure of the claim, not whether some particular target audience will be persuaded.
Right, exactly. He thinks religion is dumb and bad, and he wrote a comment that expresses what he thinks, which ends up having harsh vibes. If the comment were edited to make the vibes less harsh, then it would be less clear exactly how dumb and bad the author thinks religion is. But it would be bad to make comments less clearly express the author’s thoughts, because the function of a comment is to express the author’s thoughts.
Oh. Oh. So you agree with me that the details weren’t that well thought out (or at least you didn’t bother arguing against that), and ditto about the net effects, but you don’t think it matters (or at any rate, isn’t the important point) because you’re not trying to optimize positive effects, but just honest communication...?
This is not what I thought your position was, but I guess it makes sense if I try to retroactively fit it. This means most (all?) of my objections don’t apply anymore. Like, yeah, if you terminally value authentically representing the author’s emotional state of mind, then of course deliberately adjusting vibes is a net negative for your values.
I don’t think those are good metaphors, because the function of a markup language or traffic laws is very different from the function of blog comments. We want documents to conform to the spec of the markup language so that our browsers know how to render them. We want cars and pedestrians to follow the traffic law in order to avoid dangerous accidents. In these cases, coordination is paramount: we want everyone to follow the same right-of-way convention, rather than just going into the road whenever they individually feel like it.
In contrast, if everyone writes the blog comment they individually feel like writing, that seems good, because then everyone gets to read what everyone else individually felt like writing, rather than having to read something else, which would probably be less informative. We don’t need to coordinate the vibes. (We probably do want to coordinate the language; it would be confusing if you wrote your comments in English, but I wrote all my replies in French.)
(I think this completely misses the point I was trying to make, which is that “I will do X which I know will have bad effects, but I’ll do it anyway because the reason it has bad effects is that other people are making mistakes, so it’s not me who should change X, but other people who should change” is recognized as dumb for almost all values of X, especially on LW—but I also think this doesn’t matter anymore, either, because the argument is again about consequences, which you just demoted as the optimization target. If you agree that it doesn’t matter anymore, then no need to discuss this more.)
I guess now I have a few questions
Why do you have this position? (i.e., that comments aren’t about impact). Is this supposed to be, like, the super obvious message that was clearly the main point of the sequences, or something like that?
Is your default model of LWians that most of them have this position?
You said earlier that the repeated moderation blow-ups aren’t about bad vibes. I feel like what you’ve said since justifies why you think Said’s comments are good, but not that they aren’t about vibes—like even with everything you said here, it still seems like the causal stream here is clearly bad vibes → people complain to habryka → Said gets in trouble? (This isn’t super important, but still felt worth asking.)
Why do you have this position? (i.e., that comments aren’t about impact).
Because naïvely optimizing for impact requires concealing or distorting information that people could have used to make better (more impactful) decisions in ways that can’t realistically be anticipated by writers naïvely optimizing for impact.
Here’s an example from Ben Hoffman’s “The Humility Argument for Honesty”. Suppose my neck hurts (coincidentally, after trying a new workout routine), and after some internet research, I decide I have neck cancer. The impact-oriented approach would call for me to do my best to convince my doctor I have neck cancer, to make sure that I get the chemotherapy I’m sure I need. The honesty-oriented approach would call for me to explain to my doctor the evidence and reasoning for why I think I have neck cancer.
Maybe there’s something to be said for the impact-oriented approach if my self-diagnoses are never wrong. But if there’s a chance I could be wrong, the honesty-oriented approach is much more robust. If I don’t really have neck cancer and describe my actual symptoms, the doctor has a chance to help me discover my mistake.
Is your default model of LWians that most of them have this position?
No. But that’s OK with me, because I don’t regard “other people who use one of the same websites as me” as a generic authority figure.
it still seems like the causal stream here is clearly bad vibes → people complain to habryka → Said gets in trouble?
Yes, that sounds right. As you’ve gathered, I want to delete the second arrow rather than altering the value of the “vibes” node.
No. But that’s OK with me, because I don’t regard “other people who use one of the same websites as me” as a generic authority figure.
Was definitely not going to make an argument from authority, just trying to understand your world view.
Iirc we’ve touched on four (increasingly strong) standards for truth:
Don’t lie
(I won’t be the best at phrasing this) something like “don’t try to make someone believe things for reasons that have nothing to do with why you believe it”
Use only the arguments that convinced you (the one you mentioned here)
Make sure the comment accurately reflects your emotional state[1] about the situation.
For me, I endorse #1, and about 80% endorse #2 (you said in an earlier comment that #1 is too weak, and I agree). #3 seems pretty bad to me because the most convincing arguments to me don’t have to be the most convincing arguments to others (and indeed, they’re often not), and the argument that persuaded me initially especially doesn’t need to be good. And #4 seems extremely counter-productive both because it’ll routinely make people angry and because so much of one’s state of mind at any point is determined by irrelevant variables. It seems only slightly less crazy than—and in fact very similar to—the radical honesty stuff. (Only the most radical interpretation of #4 is like that, but as I said in the footnote, the most radical interpretation is what you used when you applied it to Said’s commenting style, so that’s the one I’m using here.)
Here’s an example from Ben Hoffman’s “The Humility Argument for Honesty” [...]
This is not a useful example, though, because it doesn’t differentiate between any two points on this 1-4 scale. You don’t even need to agree with #1 to realize that trying to convince the doctor is a bad idea; all you need to do is realize that they’re more competent than you at understanding symptoms. A non-naive, purely impact-based approach just describes symptoms honestly in this situation.
My sense is that examples that prefer something stronger than #2 will be hard to come up with. (Notably your argument for why a higher standard is better was itself consequentialist.)
Idk, I mean we’ve drifted pretty far off the original topic and we don’t have to talk any more about this if you’re not interested (and also you’ve already been patient in describing your model). I’m just getting this feeling—vibe!—of “hmm no this doesn’t seem quite right, I don’t think Zack genuinely believed #1-#4 all this time and everything was upstream of that, this position is too extreme and doesn’t really align with the earliest comment about the moderation debate, I think there’s still some misunderstanding here somewhere”, so my instinct is to dig a little deeper to really get your position. Although I could be wrong, too. In any case, like I said, feel free to end the conversation here.
Re-reading this comment again, you said ‘thought’, which maybe I should have criticized because it’s not a thought. How annoyed you are by something isn’t an intellectual position, it’s a feeling. It’s influenced by beliefs about the thing, but also by unrelated things like how you’re feeling about the person you’re talking to (RE what I’ve demonstrated with Said).
Was definitely not going to make an argument from authority, just trying to understand your world view.
Right. Sorry, I think I uncharitably interpreted “Do you think others agree?” as an implied “Who are you to disagree with others?”, but you’ve earned more charity than that. (Or if it’s odd to speak of “earning” charity, say that I unjustly misinterpreted it.)
the argument that persuaded me initially especially doesn’t need to be good
you said ‘thought’, which maybe I should have criticized because it’s not a thought. How annoyed you are by something isn’t an intellectual position, it’s a feeling. It’s influenced by beliefs about the thing, but also by unrelated things
There’s probably a crux somewhere near here. Your formulation of #4 seems bad because, indeed, my emotions shouldn’t be directly relevant to an intellectual discussion of some topic. But I don’t think that gives you license to say, “Ah, if emotions aren’t relevant, therefore no harm is done by rewriting your comments to be nicer,” because, as I’ve said, I think the nicewashing does end up distorting the content. The feelings are downstream of the beliefs and can’t be changed arbitrarily.
It’s influenced by beliefs about the thing, but also by unrelated things like how you’re feeling about the person you’re talking to (RE what I’ve demonstrated with Said).
I want to note that I dispute that you demonstrated this.
At this point, my mental model of you responds with something like
You’re probably correct on the consequential analysis (i.e., the softened version would be more likely to be persuasive)[1], but I don’t think it follows that we as a community should therefore moderate vibes because [very eloquently argued case about censorship being bad that I won’t try to replicate here]
If you don’t think this, then that would be a crux, but also I’d be very surprised and not sure how I’d continue the conversation then, but for now I’m not thinking too much about this.
FWIW, I absolutely do not think that the “softened” version would be more likely to be persuasive. (I think that the “softened” version is much worse, even more so than Zack does.)
Thinking this carefully about the content of your messages takes a lot of time. Said doesn’t take this much time for his comments, which is how he can respond so quickly.
Consider a very short post (or comment), which—briefly, elegantly, with a minimum of words—expresses some transformative idea, or makes some stunningly incisive point. Forget, for now, the question of its quality, and consider instead: how much effort went into writing it? Do you tally up only the keystrokes? Or do you count also the years of thought and experience and work that allowed the writer to come up with this idea, and this sequence of words to express it? Do you count the knowledge of a lifetime?
There’s maybe a stronger definition of “vibes” than Rafael’s “how it makes the reader feel”: something like “the mental model of the kind of person who would post a comment with this content, in this context, worded like this”. A reader might be violently allergic to eggplants and would then feel nauseous when reading a comment about cooking with eggplants, but it feels obvious it wouldn’t then make sense to say the eggplant cooking comment had “bad vibes”.
Meanwhile, if a poster keeps trying to use esoteric Marxist analysis to show how dolphin telepathy explains UFO phenomena, you might start subconsciously putting the clues together and thinking “isn’t this exactly what a crypto-Posadist would be saying”. Now we’ve got vibes. Generally, you build a model, consciously or unconsciously, of what the person is like and why they’re writing the things they do, and then “vibes” are the valence of what the model-person feels like to you. “Bad vibes” can then be things like “my model of this person has hidden intentions I don’t like”, “my model of this person has a style of engagement I find consistently unpleasant” or “my model is that this person is mentally unstable and possibly dangerous to be around”.
This is still somewhat subjective, but feels less so than “how the comment makes the reader feel”. Building the model of the person based on the text is inexact, but it isn’t arbitrary. There generally needs to be something in the text or the overall situation to support model-building, and there’s a sense that the models are tracking some kind of reality, even though inferences can go wrong and different people can pay attention to very different things. There’s still another complication: different people also disagree on goals or styles of engagement, so they might build the same model and disagree on the “vibes” of it. Even so, this isn’t completely arbitrary; most people tend to agree that the “mentally unstable and possibly dangerous to be around” model has bad vibes.
Basically the sum of what a post or comment will make the reader feel. (This is not the actual definition because the actual definition would require me to explain what I think a vibe is at the level of the brain, but it’s good enough.)
Technically this is a two-place function of post and reader because two different people can feel very different things from reading the same thing, so strictly speaking it doesn’t make sense to say that a comment has bad vibes. But in practice it’s highly correlated. So when I say this comment has bad vibes, it’s short for, “it will have bad vibes for most readers”, which I guess is in turn short for, “most people who read this will feel things that are detrimental for having a good discussion”.
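To make the two-place vs. one-place framing concrete, here is a minimal, purely illustrative Python sketch (the names comment_bluntness and reader_sensitivity are hypothetical stand-ins, not anyone’s actual model): the underlying function takes both a comment and a reader, and the one-place shorthand “this comment has bad vibes” is just an aggregate over a reader population.

    from statistics import mean

    def vibes(comment_bluntness: float, reader_sensitivity: float) -> float:
        """Two-place: a single reader's reaction depends on both the text and the reader."""
        return -(comment_bluntness * reader_sensitivity)

    def vibes_for_most_readers(comment_bluntness: float, reader_sensitivities: list[float]) -> float:
        """One-place shorthand: average the two-place function over a population of readers."""
        return mean(vibes(comment_bluntness, s) for s in reader_sensitivities)

    # A negative average ("bad vibes for most readers") can coexist with wide
    # variation between individual readers.
    print(vibes_for_most_readers(0.8, [0.1, 0.5, 0.9]))  # -0.4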
To give the most obvious example in the specific comment, the sentence
This is very straightforwardly a bad thing.
sounds very combative (i.e., will generally evoke adversarial feelings). And tbc this will also be true for people who aren’t the author because we’ve evolved to simulate how others feel; that’s why you can feel awkward watching an awkward scene in a movie.
BTW I think asking me what I mean by vibes is completely reasonable. Someone strong-downvoted your comment, I guess because it sounds pedantic, but I don’t agree with this; I don’t think this is a case where the concept is so obvious that you shouldn’t ask for a definition. (I strong-upvoted back to 0.)
Well, I think that the concept of “vibes” (of a comment), as you are using the term, is fundamentally a broken one, because it abstracts away from highly relevant causal factors.
Here’s why I say that. You say:
Technically this is a two-place function of post and reader because two different people can feel very different things from reading the same thing, so strictly speaking it doesn’t make sense to say that a comment has bad vibes. But in practice it’s highly correlated. So when I say this comment has bad vibes, it’s short for, “it will have bad vibes for most readers”, which I guess is in turn short for, “most people who read this will feel things that are detrimental for having a good discussion”.
And there are two problems with this.
First, you correctly acknowledge that different readers can have different reactions, but your dismissal of this objection with the claim that “it’s highly correlated” is a mistake, for the simple reason that the variation in reactions is not randomly distributed across readers along relevant dimensions. On the contrary, it’s highly correlated with a variety of qualities which we have excellent reason to care about (and which we might collectively summarize as “likelihood of usefully contributing to advancement of rationality and the accomplishment of useful goals”).
Second, whether there is in fact some connection (and what that connection is) between whether some comment “sounds very combative”, and whether that comment “will generally evoke adversarial feelings” (these are in fact two different things, not one thing phrased in two different ways!), and between the latter and whether a good discussion ensues, are not immutable facts! They are amenable to volitional alteration, i.e. you can choose how (or if!) these things affect one another, because you do in fact (I assume) have control of your actions, your words, your reasoning process, etc. (And to the extent that you do not have such control—well, that is a flaw, which you ought to be trying to fix. Or so I claim! Perhaps you disagree; but in order for us to resolve this disagreement, we must be able to refer to it—which we cannot do if we simply encode, in the term “vibes”, the assumption that the model I describe here is wrong.)
To speak of the “vibes” of a comment abstracts away from (and thus obscures) this critical structure in the patterns of how people react to comments.
P.S.:
Someone strong-downvoted your comment
It’s not “someone”, it’s very obviously @habryka. (Who else would strong-downvote all of my comments on this post, so consistently and so quickly after they get posted, and with a vote weight of 10—if not the person who gets notifications whenever comments get posted on this post, and who in fact has a vote weight of 10?)
On the contrary, it’s highly correlated with a variety of qualities which we have excellent reason to care about (and which we might collectively summarize as “likelihood of usefully contributing to advancement of rationality and the accomplishment of useful goals”).
I definitely don’t agree with this. Especially in this particular case, I think almost everyone will have the same reaction, and I don’t think people who don’t have this reaction are meaningfully better at rationality. (In general, I don’t think the way to improve your rationality is to make yourself as numb as possible.)
these are in fact two different things, not one thing phrased in two different ways!
that’s because I phrased it poorly. I was trying to gesture at the same feeling with both, I just don’t know what to call it. Like, the feeling that the situation you’re in has become adversarial. I think it’s a weaker version of what you’d feel if you were in a group conversation and suddenly one person insults someone else, or something like that.
They are amenable to volitional alteration
I completely agree with this, but “you can theoretically train yourself to not be bothered by it” is true for a lot of things, and no one thinks that we should therefore give people a free pass to do them. You can train yourself to have equanimity to physical pain; presumably this wouldn’t make it okay for me to inflict physical pain on you. You need more pieces to argue that we should ask people to self-modify to not have the reaction, rather than avoid triggering the reaction.
In this case, that strikes me as not reasonable. This particular reaction (i.e., having this adversarial feeling that I failed to describe well in response to the line I quoted) seems both very hard to get rid of and probably not desirable to get rid of. There’s a very good evolutionary reason why we have it (to detect conflict), and it still seems pretty valuable today. I think I’m unusually sensitive to this vibe, and I think that’s pretty useful for navigating social situations. Spotting potential conflict early is useful; this stuff is relevant information.
I definitely don’t agree with this. Especially in this particular case, I think almost everyone will have the same reaction
This may well be true, but surely you see that the “almost” is doing quite a bit of work here, yes?
I mean, think of all the true statements we might make, of the form “Almost everyone will X”. And now consider how many of them stop being true if we quantify “everyone” not over the population of the Earth, but over the commentariat of this forum. There are a lot of those!
So, is your claim here one of the latter sort? Surely we can’t assume that it isn’t, right?
And even supposing that it’s not, we still have this—
and I don’t think people who don’t have this reaction are meaningfully better at rationality.
What makes one better at rationality is behaving as if one does not have said reaction (or any reaction at all). Whether that’s because the reaction is absent, or because it’s present but controlled, is not really important.
(In general, I don’t think the way to improve your rationality is to make yourself as numb as possible.)
I wholly reject this framing. This is just a thoroughly tendentious way of putting things. We are not talking about some important information which you’re being asked to ignore. We’re talking about having an emotional reaction which interferes with your ability to consider what is being said to you. The ability to not suffer that detrimental effect is not “numbness”.
I was trying to gesture at the same feeling with both, I just don’t know what to call it. Like, the feeling that the situation you’re in has become adversarial. I think it’s a weaker version of what you’d feel if you were in a group conversation and suddenly one person insults someone else, or something like that.
Right, but the key point here is that the sentence you quoted isn’t actually anything like one person insulting someone else. You say “weaker version”, but that’s underselling the difference, which is one of kind, not merely of degree.
I’ve said something like this before, but it really bears repeating: if someone reads a paragraph like this one—
It is bad to participate in organized religion, because you are thereby exposing yourself to intense social pressure to believe false things (and very harmful false things, at that). This is very straightforwardly a bad thing.
—and experiences this as something akin to a personal insult, which seriously impacts their ability to participate in the conversation, then this person is simply not ready to participate in any kind of serious discussion, period. This is the reaction of a child, or of someone who hasn’t ever had to have any kind of serious adult conversation. Being able to deal with straightforward statements like this is a very low bar. It’s a low bar even for many ordinary professional contexts, never mind for Less Wrong (where the bar should be higher).
They are amenable to volitional alteration
I completely agree with this, but “you can theoretically train yourself to not be bothered by it” is true for a lot of things, and no one thinks that we should therefore give people a free pass to do them. You can train yourself to have equanimity to physical pain; presumably this wouldn’t make it okay for me to inflict physical pain on you. You need more pieces to argue that we should ask people to self-modify to not have the reaction, rather than avoid triggering the reaction.
Of course, but the pieces in question seem rather obvious to me. Still, let’s make them explicit:
1. You punching me doesn’t meaningfully contribute anything to the discussion; it doesn’t communicate anything of substance. Conversely, the sort of comment we’re discussing is the most effective and efficient way of communicating the relevant object-level point.
2. You punching me is a unilateral action on your part, which I cannot avoid (presumably; if I consent to the punch then that’s a very different matter, obviously). On the other hand, nobody’s forcing you to read anything on Less Wrong.
3. There’s no “theoretically” about it; it’s very easy to not be bothered by this sort of thing (indeed, I expect that when being bothered by comments like the example at hand is not rewarded with status, most people simply stop being bothered by them, without any effort on their part). (Contrast this with “train[ing] yourself to have equanimity to physical pain”, which is, as far as I know, not easy.)
4. Not being bothered by this sort of thing is good (cf. the earlier parts of this comment); being bothered by it is bad. Conversely, not being bothered by pain is probably bad (depending on what exactly that involves).
Finally, please note that “we should ask people to self-modify to not have the reaction” is a formulation which presupposes a corrective approach. I do not claim that corrective approaches are necessarily the wrong ones in this case, but there is no reason to assume that they’re the best ones, much less the only ones. Selective (and, to a lesser extent, structural) approaches are at least as likely as corrective ones to play a major role.
In this case, that strikes me as not reasonable. This particular reaction (i.e., having this adversarial feeling that I failed to describe well in response to the line I quoted) seems both very hard to get rid of and probably not desirable to get rid of.
I strongly disagree with both parts of this claim. (See above.)
There’s a very good evolutionary reason why we have it, to detect conflict, and still seems pretty valuable today. I think I’m unusually sensitive to this vibe, and I think this is pretty useful to navigate social situations. Spotting potential conflict early is useful, this stuff is relevant information.
But that’s just the thing: you shouldn’t be thinking of object-level discussions on LW as “social situations” which you need to “navigate”. If that’s how you’re approaching things, then of course you’re going to have all of these reactions—and you’ve doomed the whole enterprise right from the start! You’re operating on too high a simulacrum level. No useful intellectual work will get done that way.
And now consider how many of them stop being true if we quantify “everyone” not over the population of the Earth, but over the commentariat of this forum.
I was actually already thinking about just people on LessWrong when I wrote that. I think it’s almost everyone on LessWrong.
What makes one better at rationality is behaving as if one does not have said reaction
We’re talking about having an emotional reaction which interferes with your ability to consider what is being said to you. The ability to not suffer that detrimental effect is not “numbness”.
then this person is simply not ready to participate in any kind of serious discussion, period.
Not being bothered by this sort of thing is good (cf. the earlier parts of this comment); being bothered by it is bad.
Right, I mean, you’re repeatedly and categorically framing the problem as lying solely with the person who has the emotional reaction. You’ve done the same in the previous post where I opted out of the discussion.
It’s not my view at all. I think a community will achieve much better outcomes if being bothered by the example message is considered normal and acceptable, and writing the example message is considered bad.
I don’t know how to proceed from here. Note that I’m not trying to convince you, I’m only responding. What I can say is, if you are trying to convince me, you have to do something other than what you did in this comment, because I felt like you primarily told me things that I already understood from the other comment thread (where I truncated the discussion). In particular, there are a lot of times where you’re just stating something as if you expect me to agree with it (like all the instances I quoted), but I don’t—and again, I feel like I already knew from the other comment that you think this.
For completeness:
#1-#2
This argues that the pain thing is different; I agree it’s different; it doesn’t mean that self-modification (or selection) is desirable here.
it’s very easy to not be bothered by this sort of thing (#3)
I already said that I think ~everyone is bothered by it, so, obviously, I disagree. (I don’t even believe that you’re not bothered by this kind of thing;[1] I think you are and it does change your conduct as well, although I totally believe that you believe you’re not bothered.)
Not being bothered by this sort of thing is good
Actually I technically do agree with this—in the sense that, if you could flip a switch where you’re not bothered by it but you still notice the vibe, that would be good—but I think it’s not practically achievable so it doesn’t really matter.
This is something I usually wouldn’t say out of politeness/vibe protection, but since you don’t think I should be doing that, saying it kind of feels more respectful, idk.
It’s not my view at all. I think a community will achieve much better outcomes if being bothered by the example message is considered normal and acceptable, and writing the example message is considered bad.
That’s a strange position to hold on LW, where it has long been a core tenet that one should not be bothered by messages like that. And that has always been the case, whether it was LW2, LW1 (remember, say, ‘babyeaters’? or ‘decoupling’? or Methods of Rationality), Overcoming Bias (Hanson, ‘politics is the mindkiller’), SL4 (‘Crocker’s Rules’) etc.
I can definitely say on my own part that nothing of major value I have done as a writer online—whether it was popularizing Bitcoin or darknet markets or the embryo selection analysis or writing ‘The Scaling Hypothesis’—would have been done if I had cared too much about “vibes” or how it made the reader feel. (Many of the things I have written definitely did make a lot of readers feel bad. And they should have. There is something wrong with you if you can read, say, ‘Scaling Hypothesis’ and not feel bad. I myself regularly feel bad about it! But that’s not a bad thing.) Even my Wikipedia editing earned me doxes and death threats.
And this is because (among many other reasons) emotional reactions are inextricably tied up with manipulation, politics, and status—which are the very last things you want in a site dedicated to speculative discussion and far-out unpopular ideas, which will definitionally be ‘creepy’, ‘icky’, ‘cringe’, ‘fringe’, ‘evil’, ‘bad vibes’ etc. (Even the most brutal totalitarian dictatorships concede this when they set up free speech zones and safe spaces like the ‘science cities’.)
Could being “status-blind” in the sense that Eliezer claims to be (or perhaps some other not yet well-understood status-related property) be strongly correlated with managing to create lots of utility (in the sense of helping the world a lot)?
Currently I consider Yudkowsky, Scott Alexander, and Nick Bostrom to be three of the most important people. After reading Superintelligence and watching a bunch of interviews, one of the first things I said about Nick Bostrom to a friend was that I felt like he legitimately has almost no status concerns (that was well before LW 2.0 launched). In the case of S/A it’s less clear, but I suspect similar things.
Many of our ideas and people are (much) higher status than they used to be. It is no surprise people here might care more about status than they used to, in the same way that rich people care more about taxes than poor people.
But they were willing to be status-blind and not prize emotionality, and that is why they could become high-status. And barring the sudden discovery of an infallible oracle, we can continue to expect future high-status things to start off low-status...
This doesn’t feel like it engages with anything I believe. None of the things you listed are things I object to. I don’t object to how you wrote the Scaling Hypothesis post, I don’t object to the Baby Eaters, I super don’t object to decoupling, and I super extra don’t object to ‘politics is the mind-killer’. The only one I’d even have to think about is Crocker’s Rules, but I don’t think I have an issue with those, either. They’re notably something you opt into.
I can definitely say on my own part that nothing of major value I have done as a writer online—whether it was popularizing Bitcoin or darknet markets or the embryo selection analysis or writing ‘The Scaling Hypothesis’—would have been done if I had cared too much about “vibes” or how it made the reader feel. (Many of the things I have written definitely did make a lot of readers feel bad. And they should have. There is something wrong with you if you can read, say, ‘Scaling Hypothesis’ and not feel bad. I myself regularly feel bad about it! But that’s not a bad thing.) Even my Wikipedia editing earned me doxes and death threats.
I claim that Said’s post is bad because it can be rewritten into a post that fulfills the same function but doesn’t feel as offensive.[1] Nothing analogous is true for the Scaling Hypothesis. And it’s not just that you couldn’t rewrite it to be less scary but convey the same ideas; rather, the whole comparison is a non-starter because I don’t think that your post on the scaling hypothesis has bad vibes, at all. If memory serves (I didn’t read your post in its entirety back then, but I read some of it and I have some memory of how I reacted), it sparks a kind of “holy shit this is happening and extremely scary ---(.Ó﹏Ò.)” reaction. This is, like, actively good. It’s not in the same category as Said’s comment in any way whatsoever.
[...] on LW, where it has long been a core tenet that one should not be bothered by messages like that.
I agree that it is better to not be bothered. My position is not “you should be more influenced by vibes”, it’s something like “in the real world vibes are about 80% of the causal factors behind most people’s comments on LW and about 95% outside of LW, and considering this fact about how brains work in how you write is going to be good, not bad”. In particular, as I described in my latest response to Zack, I claim that the comments that I actually end up leaving on this site are significantly less influenced by vibes than Said’s, because recognizing what my brain does allows me to reject it if I want to. Someone who earnestly believes themselves to be vibe-blind while not being vibe-blind at all can’t do that.
Someone once wrote, upon being newly arrived to LW, a good observation of the local culture about how this works [...]
This honestly just doesn’t seem related, either. Status-blindness is more specific than vibe-blindness, and even if vibe-blindness were a thing, it wouldn’t contradict anything I’ve argued for.
It is not identical in terms of content, as Zack pointed out, but here I’m using “function” in the sense of the good thing the comment achieves, which is to leave a strongly worded and valid criticism of the post. (In actual fact, I think my version is significantly more effective at doing that.)
I claim that Said’s post is bad because it can be rewritten into a post that fulfills the same function but doesn’t feel as offensive.[1] Nothing analogous is true for the Scaling Hypothesis. And it’s not just that you couldn’t rewrite it to be less scary but convey the same ideas; rather, the whole comparison is a non-starter because I don’t think that your post on the scaling hypothesis has bad vibes, at all. If memory serves (I didn’t read your post in its entirety back then, but I read some of it and I have some memory of how I reacted), it sparks a kind of “holy shit this is happening and extremely scary ---(.Ó﹏Ò.)” reaction. This is, like, actively good
This description of ‘bad vibes’ vs ‘good vibes’ and what could be ‘be rewritten into a post that fulfills the same function’, is confusing to me because I would have said that that is obviously untrue of Scaling Hypothesis (and as the author, I should hope I would know), and that was why I highlighted it as an example: aside from the bad news being delivered in it, I wrote a lot of it to be deliberately rude and offensive—and those were some of the most effective parts of it! (And also, yes, made people mad at me.) Just because the essay was effective and is now high-status doesn’t change that. It couldn’t’ve been rewritten and achieved the same outcome, because that was much of the point.
(To be clear, my take on all of this is that it is often appropriate to be rude and offensive, and often inappropriate. What has made these discussions so frustrating is that Said continues to insist that no rudeness or offensiveness is present in any of his writing, which makes it impossible to have a conversation about whether the rudeness or offensiveness is appropriate in the relevant context.
Like, yeah, LessWrong has a culture, a lot of which is determined by what things people are rude and offensive towards. One of my jobs as a moderator is to steer where that goes. If someone keeps being rude and offensive towards things I really want to cultivate on the site, I will tell them to stop, or at least ask them to provide arguments for why this thing, which I do not think is worth scorn, deserves scorn.
But if that person then insists that no rudeness or offensiveness was present in any of their writing, despite an overwhelming fraction of readers reading it as such, then they are either a writer so bad at communication as to not belong on the site, or trying to avoid accountability for the content of their messages, both of which leave little room but to take moderation action that limits their contributions to the site)
When you say that “it is often appropriate to be rude and offensive”, and that LW culture admits of things toward which it is acceptable to be “rude and offensive”, this would seem to imply that the alleged rudeness and offensiveness as such is not the problem with my comments, but rather that the problem is what I am supposedly being rude and offensive towards; and that the alleged “rudeness and offensiveness” would not itself ever be used against me (and that if a moderator tried to claim that “rudeness and offensiveness” is itself punishable regardless of target, or if a user tried to claim that LW norms forbid being rude and offensive, then you’d show up and say “nope, wrong, actually being rude and offensive is fine as long as it’s toward the right things, so kindly withdraw that particular criticism; Said has violated no rules or norms by being rude and offensive as such”). True? Or not?
Yep, though of course there are priors. The thing I am saying is that there are at least some things (and not just an extremely small set of things) that it is OK to be rude towards, not that the average quality/value-produced of rude and non-rude content is the same.
For enforcement efficiency reasons, cultural Schelling-point reasons, and various other reasons, it might still make sense to place something like a burden of proof on the person who claims that in this case rudeness and offensiveness is appropriate, so enforcement for rudeness without justification might still make sense, and my guess is it does indeed make sense.
Also, for you in particular, I have seen the things that you tend to be rude and offensive towards, at least historically, and haven’t been very happy about that, and so the prior is more skewed against that. My guess is I would tell you in particular that you have a bad track record of aiming it well, and so would request additional justification on the marginal case from your side (similar to how we generally treat repeat criminal offenders differently from first-time offenders, and often remove from their option pool whole sets of actions that are otherwise completely legal, in prevention of future harm).
For enforcement efficiency reasons, cultural Schelling-point reasons, and various other reasons, it might still make sense to place something like a burden of proof on the person who claims that in this case rudeness and offensiveness is appropriate, so enforcement for rudeness without justification might still make sense, and my guess is it does indeed make sense.
… ah. So, less “yep” and more “nope”.
On the other hand, maybe this “burden of proof” business isn’t so bad. Actually, I was just reading your comments on the recent post about eating honey, including this top-level comment where you say that the ideas in the OP “sound approximately insane”, that they’re “so many orders of magnitude away from what sounds reasonable” that you cannot but seriously entertain the notion that said ideas were not motivated by reasonably thinking about the topic, but rather by “social signaling madness where someone is trying to signal commitment to some group standard of dedication”.
I thought that it was a good comment, personally. (Actually, I found basically all your comments on that post to be upvote-worthy.) That comment is currently at 47 karma, so it would seem that there’s more or less a consensus among LW users that it’s a good comment. I did see that you edited the comment (after I’d initially read and upvoted it) to include somewhat of a disclaimer:
Edit: And to avoid a slipping of local norms here. I am only leaving this comment here now after I have seriously entertained the hypothesis that I might be wrong, that maybe there do exist good arguments for moral weights that seem crazy to me from where I was originally, but no, after looking into the arguments for quite a while, they still seem crazy to me, and so now I feel comfortable moving on and trying to think about what psychological or social process produces posts like this. And still, I am hesitant about it, because many readers have probably not gone through the same journey, and I don’t want a culture of dismissing things just because they are big and would imply drastic actions.
Is this the sort of thing that you have in mind, when you talk about burden of proof?
If I include disclaimers like this at the end of all of my comments, does that suffice to solve all of the problems that you perceive in said comments? (And can I then be as “rude and offensive” as I like? Hypothetically, that is. If I were inclined to be “rude and offensive”.)
Is this the sort of thing that you have in mind, when you talk about burden of proof?
Yes-ish, though I doubt we have a shared understanding of what “that sort of thing” is.
If I include disclaimers like this at the end of all of my comments, does that suffice to solve all of the problems that you perceive in said comments? (And can I then be as “rude and offensive” as I like? Hypothetically, that is. If I were inclined to be “rude and offensive”.)
No, of course not. As I explained, as moderator and admin I will curate or at least apply heavy pressure on which things receive scorn and rudeness on LW.
A disclaimer is the start of an argument. If the argument is wrong by my lights, you will still get told off. The standard is not “needs to make an argument”, it’s (if anything) “needs to make an argument that I[1] think is good”. Making an argument is not in itself something that does something.
(Not necessarily just me; there are other mods, and a kind of complicated social process that involves many stakeholders who can override me, or whom I will try to take into account and integrate, but for the sake of conversation we can assume it’s “me”)
Who decides if the argument suffices? You and the other mods, presumably? (EDIT: Confirmed by subsequent edit to parent comment.)
If so, then could you explain how this doesn’t end up amounting to “the LW mods have undertaken to unilaterally decide, in advance, what are the correct views on all topics and the correct positions in all arguments”? Because that’s what it seems like you have to do, in order for your policy to make any sense.
EDIT: Could you expand on “a kind of complicated social process that involves many stakeholders that can override me”? I don’t know what you mean by this.
At the end of the day, I[1] have the keys to the database and the domain, so in some sense anything that leaves me with those keys can be summarized as “the LW mods have undertaken to unilaterally decide, in advance, what are the correct views on all topics and the correct positions in all arguments”.
But of course, that is largely semantic. It is of course not the case that I have or would ever intend to make a list of allowed or forbidden opinions on LessWrong. In contrast, I have mostly procedural models about how LessWrong should function, including the importance of LW as a free marketplace of ideas, a place where contradicting ideas can be discussed and debated, and many other aspects of what will cause the whole LW project to go well. Expanding on all of them would of course far exceed this comment thread.
On the specific topic of which things deserve scorn or ridicule or rudeness, I also find it hard to give a very short summary of what I believe. We have litigated some past disagreements in the space (such as whether people using their moderation tools to ban others from their blogpost should be subject to scorn or ridicule in most cases), which can provide some guidance, though the breadth of things we’ve covered is fairly limited. It is also clear to me that the exact flavor of rudeness and aggressiveness matters quite a bit. I favor straightforward aggression over passive aggression, and have expressed my model that “sneering” as a mental motion is almost never appropriate (though not literally never, as I expanded on).
And on most topics, I simply don’t know yet, and I’ll have to figure it out as it comes up. The space of ways people can be helpfully or unhelpfully judgmental and aggressive is very large, and I do not have most of it precomputed. I do have many more principles I could expand on, and would like to do so sometime, but this specific comment thread does not seem like the time.
At the end of the day, I[1] have the keys to the database and the domain, so in some sense anything that leaves me with those keys can be summarized as “the LW mods have undertaken to unilaterally decide, in advance, what are the correct views on all topics and the correct positions in all arguments”.
It seems clear that your “in some sense” is doing pretty much all the work here.
Compare, again, to Data Secrets Lox: there, I have the keys to the database and the domain (and in the case of DSL, it really is just me, no one else—the domain is just mine, the database is just mine, the server config passwords… everything), and yet I don’t undertake to decide anything at all, because I have gone to great lengths to formally surrender all moderation powers (retaining only the power of deleting outright illegal content). I don’t make the rules; I don’t enforce the rules; I don’t pick the people who make or enforce the rules. (Indeed the moderators—who were chosen via the system that I put into place—can even temp-ban me, from my own forum, that I own and run and pay for with my own personal money! And they have! And that is as it should be.)
I say this not to suggest that LW should be run the way that DSL is run (that wouldn’t really make sense, or work, or be appropriate), but to point out that obviously there is a spectrum of the degree to which having “the keys to the database and the domain” can, in fact, be meaningfully and accurately talked about as “the … mods have undertaken to unilaterally decide, in advance, what are the correct views on all topics and the correct positions in all arguments”—and you are way, way further along that spectrum than the minimal possible value thereof. In other words, it is completely possible to hold said keys, and yet (compared to how you run LW) not, in any meaningful sense, undertake to unilaterally decide anything w.r.t. correctness of views and positions.
It is of course not the case that I have or would ever intend to make a list of allowed or forbidden opinions on LessWrong. In contrast, I have mostly procedural models about how LessWrong should function, including the importance of LW as a free marketplace of ideas, a place where contradicting ideas can be discussed and debated, and many other aspects of what will cause the whole LW project to go well. Expanding on all of them would of course far exceed this comment thread.
Yes, well… the problem is that this is the central issue in this whole dispute (such as it is). The whole point is that your preferred policies (the ones to which I object) directly and severely damage LW’s ability to be “a free marketplace of ideas, a place where contradicting ideas can be discussed and debated”, and instead constitute you effectively making a list of allowed or forbidden opinions on this forum. Like… that’s pretty much the whole thing, right there. You seem to want to make that list while claiming that you’re not making any such list, and to prevent the marketplace of ideas from happening while claiming that the marketplace of ideas is important. I don’t see how you can square this circle. Your preferred policies seem to be fundamentally at odds with your stated goals.
Yes, well… the problem is that this is the central issue in this whole dispute (such as it is). The whole point is that your preferred policies (the ones to which I object) directly and severely damage LW’s ability to be “a free marketplace of ideas, a place where contradicting ideas can be discussed and debated”, and instead constitute you effectively making a list of allowed or forbidden opinions on this forum.
I don’t see where I am making any such list, unless you mean “list” in a weird way that doesn’t involve any actual lists, or even things that are kind of like lists.
in any meaningful sense, undertake to unilaterally decide anything w.r.t. correctness of views and positions.
I don’t think that’s an accurate description of DSL; indeed, it appears to me that the de-facto list produced by the kind of policy you have chosen is pretty predictable (and IMO does not result in particularly good outcomes). Just because you have some other people make the choices doesn’t change the predictability of the actual outcome, or who is responsible for it.
I already made the obvious point that of course, in some sense, I/we will define what is OK on LessWrong via some procedural way. You can dislike the way I/we do it.
There is definitely no “fundamentally at odds”; there is a difference in opinion about what works here, which you and I have already spent hundreds of hours trying to resolve, and which we seem unlikely to resolve right now. Just making more comments stating that “I am wrong” in big words will not make that happen faster (or more likely to happen at all).
Seems like we got lost in a tangle of edits. I hope my comment clarifies sufficiently, as it is time for me to sleep, and I am somewhat unlikely to pick up this thread tomorrow.
Not going to go into this, since I think it’s actually a pretty complicated situation, but at a very high level some obvious groups that could override me:
The Lightcone Infrastructure board (me, Vaniver, Daniel Kokotajlo)
If Eliezer really wanted, he can probably override me
A more distributed consensus among what one might consider the leadership of the rationality community (like, let’s say Scott Alexander and Ryan Greenblatt and Buck and Nate and John Wentworth and Gwern all roughly agree on me messing up really badly)
There would be lots more to say on this topic, but as I said, I am unlikely to pick this thread up again, so I hope that’s good enough!
(This is a tangent to the thread and so I don’t plan to reply further on this, but I just wanted to mention that while I view Greenblatt and Shlegeris as stakeholders in LessWrong, a space they’ve made many great contributions to and are quite active in, I don’t view them as leadership of the rationality community.)
Rudeness and offensiveness are, in the general case, two-place functions: text can be offensive to some particular reader, but short of unambiguous blatant insults, there’s not going to be a consensus about what is “offensive”, because people vary widely (both by personal disposition and vagarious incentives) in how easy they are to offend.
When it is denied that Achmiz’s comments are offensive, the claim isn’t that no one is offended. (That would be silly. We have public testimony from people who are offended!) The claim is that the text isn’t rude in a “one-place” sense (no personal insults, &c.).
The reason that “one-place” rudeness is the relevant standard is because it would be bad if a fraction of easily-offended readers (even a substantial fraction—I don’t think you can defend the adjective “overwhelming”) could weaponize their emotions to censor expressions of ideas that they don’t like.
The comment is expressing an opinion about discourse norms (“There is always an obligation”) and a belief about what Bayesian inferences are warranted by the absence of replies to a question (“the author should be interpreted as ignorant”). It makes sense that many people disagree with that opinion and that belief (say, because they think that some of the questions that Achmiz thinks are good, are actually bad, and that ignoring bad questions is good). Fine.
But beyond mere disagreement, to characterize such a comment as offensive (because it criticizes people who don’t respond to questions), is something I find offensive. (If you’re thinking of allegedly worse behavior from Achmiz than this January 2020 comment, you’re going to need to provide the example.) Sometimes people who use the same website as you have opinions or beliefs that imply that they disapprove of your behavior! So what? I think grown-ups should be able to shrug this off without calling for draconian and deranged censorship policies. The mod team should not be pandering to such pathetic cry-bullying.
But beyond mere disagreement, to characterize such a comment as offensive (because it criticizes people who don’t respond to questions), is something I find offensive.
The comment is offensive because it communicates things other than its literal words. Autistically taking it apart word by word and saying that it only offends because it is criticism ignores this implicit communication.
Gwern himself refers to the “rude and offensive” part in this subthread as a one-place function:
aside from the bad news being delivered in it, I wrote a lot of it to be deliberately rude and offensive—and those were some of the most effective parts of it! (And also, yes, made people mad at me.)
I have no interest in doing more hand-wringing about whether Said’s comments are intended to make people feel judged or not, and don’t find your distinction of “no personal insults” as somehow making the rudeness more objective compelling. If you want we can talk about the Gwern hypothetical in which he clearly intended to be rude and offensive towards other people.
I think grown-ups should be able to shrug this off without calling for draconian and deranged censorship policies.
This is indeed a form of aggression and scorn that I do not approve of on this site, especially after extensive litigation.
I’ll leave it on this thread, but as a concrete example for the sake of setting clear guidelines, strawmanning all (or really any) authors who have preferences about people not being super aggro in their comment threads as “pathetic cry-bullying” and “calling for draconian and deranged censorship policies” is indeed one of the things that will get you banned from this site on other threads! You have been warned!
I don’t think the relevant dispute about rudeness/offensiveness is about one-place and two-place functions, I think it’s about passive/overt aggression. With passive aggression you often have to read more of the surrounding context to understand what is being communicated, whereas with overt aggression it’s clear if you just locally inspect the statement (or behavior), which sounds like one / two place functions (because ppl with different information states look at the same message and get different assessments), but isn’t.
For instance, suppose Alice doesn’t invite Bob to a party, and then Bob responds by ignoring all of Alice’s texts and avoiding eye contact most of the time. Now any single instance of “not responding to a text” isn’t aggression, but in the context of a change in a relationship where it was typical to reply same-day, to zero replies, it can be understood as retaliation. And of course, even then it’s not provable; there are other possible explanations (such as Bob is taking a GLP-1 agonist and is quite low-energy at the minute; don’t think too hard about why I picked that example), which makes it a great avenue for hard-to-litigate retaliation.
Does everyone here remember and/or agree with my point in The Nature of Offense, that offense is about status, which in the current context implies that it’s essentially impossible to avoid giving offense while delivering strong criticism (as it almost necessarily implies that the target of criticism deserves lower status for writing something seriously flawed, having false/harmful beliefs, etc.)? @habryka @Zack_M_Davis @Said Achmiz
This discussion has become very long and I’ve been travelling so I may have missed something, but has anyone managed to write a version of Said’s comment that delivers the same strength of criticism while avoiding offending its target? (Given the above, I think this would be impossible.)
Not a direct response, but I want to take some point in this discussion (I think I said this to Zack in-person the other day) to say that, while some people are arguing that things should as a rule be collaborative and not offensive (e.g. to varying extents Gordon and Rafael), this is not the position that the LW mods are arguing for. We’re arguing that authors on LessWrong should be able to moderate their posts with different norms/standards from one another, and that there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
I could see it being confusing because sometimes an author like Gordon is moderating you, and sometimes a site-mod like Habryka is moderating you, but they are using different standards, and the LW-mods are not typically endorsing the author standards as our own. I even generally agree with many of the counterarguments that e.g. Zack makes against those norms being the best ones. Some of my favorite comments on this site are offensive (where ‘offensive’ is referring to Wei’s meaning of ‘lowering someone’s social status’).
We’re arguing that authors on LessWrong should be able to moderate their posts with different norms/standards from one another, and that there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
What is currently the acceptable range of moderation norms/standards (according to the LW mod team)? For example if someone blatantly deletes/bans their most effective critics, is that acceptable? What if they instead subtly discourage critics (while being overtly neutral/welcoming) by selectively enforcing rules more stringently against their critics? What if they simply ban all “offensive” content, which as a side effect discourages critics (since, as I mentioned earlier, criticism almost inescapably implies offense)?
And what does “retribution or counter-punishment” mean? If I see an author doing one of the above, and question or criticize that in the comments or elsewhere, is that considered “retribution or counter-punishment” given that my comment/post is also inescapably offensive (status-lowering) toward the author?
What is currently the acceptable range of moderation norms/standards (according to the LW mod team)?
I think the first answer is “Mostly people aren’t using this feature, and the few times people have used it it has not felt to us like abuse or strongly needing to be pushed back on” so I don’t have any examples to point to.
But I’ll quickly generate thoughts on each of the hypothetical scenarios you briefly gestured to.
For example if someone blatantly deletes/bans their most effective critics, is that acceptable?
It’d depend on how things played out. If Andrew writes a blogpost with a big new theory of rationality, and then Bob and Charlie and Dave all write decisive critiques and then their comments are deleted and banned from commenting on his posts, I think it’s quite plausible that they’ll write a new post together with the copy-paste of their comments and it’ll get more karma than the original. This seems like a good-enough outcome to me. On the other hand if Andrew only gets criticism from Bob, and then deletes Bob’s comments and bans him from commenting on his posts, and then Bob leaves the site, I would take more active action, such as perhaps removing Andrew’s ability to ban people, and reaching out to Bob to thank him for his comments and encourage him to return.
What if they instead subtly discourage critics (while being overtly neutral/welcoming) by selectively enforcing rules more stringently against their critics?
That sounds like there’d be some increased friction on criticism. Hopefully we’d try to notice it and counteract it, or hopefully the commenters who were having an annoying experience being moderated would notice and move to shortform or posts and do their criticism from there. But plausibly there’d just be some persistent additional annoyances or costs that certain users would have to pay.
What if they simply ban all “offensive” content, which as a side effect discourages critics (since, as I mentioned earlier, criticism almost inescapably implies offense)?
I mean, again, probably this would just be very incongruous with LessWrong and it wouldn’t really work and they’d have to ban like 30+ users because everyone wouldn’t get this and would keep doing things the author didn’t like, and the author would eventually leave if they needed that sort of environment, or we’d step in after like 5 and say “this is kind of crazy, you have to stop doing this, it isn’t going to work out, we’re removing your ability to ban users”. So many of the good comments on LessWrong lower their interlocutor’s status in some way.
And what does “retribution or counter-punishment” mean?
It means actions that predictably make the author feel that their using the ban feature is in general illegitimate, or that using it will cause their reputation to be attacked, regardless of reason or context.
If I see an author doing one of the above, and question or criticize that in the comments or elsewhere, is that considered “retribution or counter-punishment” given that my comment/post is also inescapably offensive (status-lowering) toward the author?
Many many writers on LessWrong are capable of critiquing a single instance of a ban while taking care to communicate that they are not pushing back on all instances of banning, and can also credibly offer support in other instances that are more reasonable.
Generally it is harder to signal this when you are complaining about your own banning. For in-person contexts (e.g. events) I generally spend effort to ensure that people do not feel any cost for not inviting me to events or spaces, and do not expect that I will complain loudly or cause them to lose social status for it, and a similar (but not identical) heuristic applies here. If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
There is still good form and bad form to imposing costs on people for moderating their spaces, and costs imposed on people for moderating their spaces (based on disagreement or even trying to fix biases in the moderation) are the most common reason for good spaces not existing; moderation is unpleasant work, lots of people feel entitled to make strong social bids on you for your time and to threaten to attack your social standing, and I’ve seen many spaces degrade due to unwillingness to moderate. You should of course think about this if you are considering reliably complaining loudly every time anyone uses a ban feature on people.
Added: I hope you get a sense from reading this that your questions don’t have simple answers, but that the scenarios you describe require active steering depending on the dynamics at play. I am somewhat wary that you will keep asking me a lot of short questions that, due to your inexperience moderating spaces, you will assume have simple answers, and that I will have to do lots of work generating all the contexts to show how things play out, else Said (or someone allied with him against his being moderated on LW) will claim I am unable to answer the most basic of questions and that this shows me to be either ignorant or incompetent. And, man, this is a lot of moderation discussion.
If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
If I was in this circumstance, I would be pretty worried about my own biases, and ask neutral or potentially less biased parties whether there might be more charitable and reasonable hypotheses why that person doesn’t want to interact with me. If there isn’t though, why shouldn’t I complain and e.g. make it common knowledge that my valuable criticism is being suppressed? (Obviously I would also take into consideration social/political realities, not make enemies I can’t afford to make, etc.)
I’ve seen many spaces degrade due to unwillingness to moderate
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seem worth the cost (to potential critics and people like me who really value criticism) of not knowing when their critiques might be unilaterally deleted or banned by post authors. I mean, aside from the “benefit” of attracting/retaining the authors who demand such unilateral powers.
And, man, this is a lot of moderation discussion.
Aside from the above “benefit”, it seems like you’re currently getting the worst of both worlds: lack of significant usage (and therefore of potential positive effects), and lots of controversy when it is occasionally used. If you really thought this was an important feature for the long term health of the community, wouldn’t you do something to make it more popular? (Or have done it in the past 7 years since the feature came out?) But instead you (the mod team) seem content that few people use it, only coming out to defend the feature when people explicitly object to it. This only seems to make sense if the main motivation is again to attract/retain certain authors.
I am somewhat wary you will keep asking me a lot of short questions that, due to your inexperience moderating spaces, you will assume have simple answers, and I will have to do lots of work generating all the contexts to show how things play out
It seems like if you actually wanted or expected many people to use this feature, you would have written some guidelines on what people can and can’t do, or under what circumstances their moderation actions might be reversed by the site moderators. I don’t think I was expecting the answers to my questions to necessarily be simple, but rather that the answers already exist somewhere, at least in the form of general guidelines that might need to be interpreted to answer my specific questions.
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seem worth the cost
I mean, mostly we’ve decided to give the people who complain about moderation a shot, and compensate by spending much much more moderation effort from the moderators. My guess is this has cost a large amount of counterfactual quality of the site, many contributors, etc.
In general, I find arguments of the form “so to the extent that LW hasn’t been destroyed, X can’t be that valuable” pretty weak. It’s very hard to assess the counterfactual, and “if not X, LessWrong would have been completely destroyed” is rarely the case for almost any X that is in dispute.
My guess is LW would be a lot better if more people felt comfortable moderating things, and in the present world, there are a lot of costs borne by the site admins that wouldn’t be necessary otherwise.
I mean, mostly we’ve decided to give the people who complain about moderation a shot
What do you mean by this? Until I read this sentence, I saw you as giving the people who demand unilateral moderation powers a shot, and denying the requests of people like me to reduce such powers.
My not very confident guess at this point is that if it weren’t for people like me, you would have pushed harder for people to moderate their own spaces more, perhaps by trying to publicly encourage this? And why did you decide to go against your own judgment on it, given that “people who complain about moderation” have no particular powers, except the power of persuasion (we’re not even threatening to leave the site!), and it seems like you were never persuaded?
My guess is LW would be a lot better if more people felt comfortable moderating things, and in the present world, there are a lot of costs borne by the site admins that wouldn’t be necessary otherwise.
This seems implausible to me given my understanding of human nature (most people really hate to see/hear criticism) and history (few people can resist the temptation to shut down their critics when given the power and social license or cover to do so). If you want a taste of this, try asking DeepSeek some questions about the CCP.
But presumably you also know this (at least abstractly, but perhaps not as viscerally as I do, coming from a Chinese background, where even before the CCP, criticism in many situations was culturally/socially impossible), so I’m confused and curious why you believe what you do.
My guess is that you see a constant stream of bad comments, and wish you could outsource the burden of filtering them to post authors (or combine efforts to do more filtering). But as an occasional post author, my experience is that I’m not a reliable judge of what counts as a “bad comment”; e.g., I’m liable to view a critique as a low-quality comment, only to change my mind later after seeing it upvoted and trying harder to understand/appreciate its point. Given this, I’m much more inclined to leave the moderation to the karma system, which seems to work well enough at keeping bad comments at low karma/visibility (simply by not upvoting them), and even when it’s occasionally wrong, it still provides a useful signal to me that many people share the same misunderstanding and that it’s worth my time to try to correct it (or maybe, by engaging with it, I find out that I still misjudged it).
But if you don’t think it works well enough… hmm, I recall writing a post about moderation tech proposals in 2016, and maybe there have been newer ideas since then?
I mean, I have written like 50,000+ words about this at this point in various comment threads. About why I care about archipelagos, why I think it’s hard and bad to try to have centralized control over culture, how much people hate being in places with ambiguous norms, and many other things. I don’t fault you for not reading them all, but I have done a huge amount of exposition.
And why did you decide to go against your own judgment on it, given that “people who complain about moderation” have no particular powers, except the power of persuasion (we’re not even threatening to leave the site!), and it seems like you were never persuaded?
Because the only choice at this point would be to ban them, since they appear willing to take any remaining channel or opportunity to heap approximately as much scorn and snark and social punishment as they can on anyone daring to do moderation they disagree with, and I value things like readthesequences.com and many other contributions from the relevant people enough that that seemed really costly and sad.
My guess is I will now do this, as it seems like the site doesn’t really have any other choice, and I am tired and have better things to do, but I think I was justified and right to be hesitant to do this for a while (though yes, ex post it would obviously have been better to just do that 5 years ago).
It seems to me there are plenty of options aside from centralized control and giving authors unilateral powers, and last I remember (i.e., at the end of this post), the mod team seemed to be pivoting to other possibilities, some of which I would find much more reasonable/acceptable. I’m confused why you’re now so focused again on the model of authors-as-unilateral-moderators. Where have you explained this?
I have filled my interest in answering questions on this, so I’ll bow out and wish you good luck. Happy to chat some other time.
I don’t think we ever “pivoted to other possibilities” (Ray often makes posts with moderation things he is thinking about, and the post doesn’t say anything about pivoting). Digging up the exact comments on why ultimately there needs to be at least some authority vested in authors as moderators seems like it would take a while.
I meant pivot in the sense of “this doesn’t seem to be working well, we should seriously consider other possibilities” not “we’re definitely switching to a new moderation model”, but I now get that you disagree with Ray even about this.
In your comment under Ray’s post, you wrote:
We did end up implementing the AI Alignment Forum, which I do actually think is working pretty well and is a pretty good example of how I imagine Archipelago-like stuff to play out. We now also have both the EA Forum and LessWrong creating some more archipelago-like diversity in the online-forum space.
This made me think you were also no longer very focused on the authors-as-unilateral-moderators model and were thinking more about subreddit-like models that Ray mentioned in his post.
BTW, I’ve been thinking for a while that LW needs a better search, as I’ve also often been in the position of being unable to find some comment I’ve written in the past.
Instead of one-on-one chats (or in addition to them), I think you should collect/organize your thoughts in a post or sequence, for a number of reasons including that you seem visibly frustrated that after having written 50k+ words on the topic, people like me still don’t know your reasons for preferring your solution.
We did end up implementing the AI Alignment Forum, which I do actually think is working pretty well and is a pretty good example of how I imagine Archipelago-like stuff to play out. We now also have both the EA Forum and LessWrong creating some more archipelago-like diversity in the online-forum space.
Huh, ironically I now consider the AI Alignment Forum a pretty big mistake in how it’s structured (for reasons mostly orthogonal but not unrelated to this).
BTW, I’ve been thinking for a while that LW needs a better search, as I’ve also often been in the position of being unable to find some comment I’ve written in the past.
Agree.
Instead of one-on-one chats (or in addition to them), I think you should collect/organize your thoughts in a post or sequence, for a number of reasons including that you seem visibly frustrated that after having written 50k+ words on the topic, people like me still don’t know your reasons for preferring your solution.
I think I have elaborated non-trivially on my reasons in this thread, so I don’t really think it’s an issue of people not finding it.
I do still agree it would be good to do more sequences-like writing on it, though like, we are already speaking in the context of Ray having done that a bunch (referencing things like the Archipelago vision), and writing top-level content takes a lot of time and effort.
I think I have elaborated non-trivially on my reasons in this thread, so I don’t really think it’s an issue of people not finding it.
It’s largely an issue of lack of organization and conciseness (50k+ words is a minus, not a plus in my view), but also clearly an issue of “not finding it”, given that you couldn’t find an important comment of your own, one that (judging from your description of it) contains a core argument needed to understand your current insistence on authors-as-unilateral-moderators.
If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
If I was in this circumstance, I would be pretty worried about my own biases, and ask neutral or potentially less biased parties whether there might be more charitable and reasonable hypotheses why that person doesn’t want to interact with me. If there isn’t though, why shouldn’t I complain and e.g. make it common knowledge that my valuable criticism is being suppressed? (Obviously I would also take into consideration social/political realities, not make enemies I can’t afford to make, etc.)
I’m having a hard time seeing how this reply is hooking up to what I wrote. I didn’t say critics; I spoke much more generally. If someone wants to keep their distance from you because you have bad body odor, or because they think your job is unethical, and you either don’t know this or disagree, it’s pretty bad social form to go around loudly complaining every time they keep their distance from you. It makes it more socially costly for them to act in accordance with their preferences and creates a bunch of unnecessary social conflict. I’m pretty sure this is obvious, and it doesn’t change if you’ve suddenly developed a ‘criticism’ of them.
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seem worth the cost (to potential critics and people like me who really value criticism) of not knowing when their critiques might be unilaterally deleted, or they themselves banned, by post authors. I mean aside from the “benefit” of attracting/retaining the authors who demand such unilateral powers.
I mean, I think it pretty plausible that LW would be doing even better than it is with more people doing more gardening and making more moderated spaces within it, archipelago-style.
I read you questioning my honesty and motivations a bunch (e.g. you have a few times mentioned that I probably only care about this because of status reasons I cannot mention, or to attract certain authors, and that my behavior is not consistent with believing that users moderating their own posts is a good idea), which are of course fine hypotheses for you to consider. After spending probably over 40 hours this month writing explanations of why I think authors moderating their posts is a good idea, and making some defense of myself and my reasoning, I think I’ve done my duty in showing up to engage with this semi-prosecution for the time being, and will let people come to their own conclusions. (Perhaps I will write up a summary of the discussion at some point.)
and there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
Great, so all you need to do is make a rule specifying what speech constitutes “retribution” or “counterpunishment” that you want to censor on those grounds.
Maybe the rule could be something like, “No complaining about being banned by a specific user (but commenting on your own shortform strictly about the substance of a post that you’ve been banned from does not itself constitute complaining about the ban)” or “No arguing against the existence on the user ban feature except in designated moderation threads (which get algorithmically deprioritized in the new Feed).”
It’s your website! You have all the hard power! You can use the hard power to make the rules you want, and then the users of the website have a clear choice to either obey the rules or be banned from the site. Fine.
What I find hard to understand is why the mod team seems to think it’s good for them to try to shape culture by means other than clear and explicit rules that could be neutrally enforced. Telling people to “stop optimizing in a fairly deep way” is not a rule because of how vague and potentially all-encompassing it is. Telling people to avoid “mak[ing] people feel judged or not” is not a rule because I don’t have control over how other people feel.
“Don’t tell people ‘I’m judging you about X’” is a rule. I can do that.
What I can’t do is convincingly pretend to be a person with a completely different personality such that people who are smart about subtext can’t even guess from subtle details of my writing style that I might privately be judging them.
I mean, maybe I could if I tried very hard? But I have too much self-respect to try. If the mod team wants to force temperamentally judgemental people to convincingly pretend to be non-judgemental, that seems really crazy.
I know, the mods didn’t say “We want temperamentally judgemental people to convincingly pretend to have a completely different personality” in those words; rather, Habryka said he wanted to “avoid a passive aggressive culture tak[ing] hold”. I just don’t see what the difference is supposed to be in practice.
Mm, I think sometimes I’d rather judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.
A key question is: Are authors comfortable using the mod tools the site gives them to garden their posts?
You can write lots of judgmental comments criticizing an author’s posts, and then they can ban you from their comments because they find engaging with you to be exhausting, and then you can make a shortform where you and your friends call them a coward, and then they stop using the mod tools (and other authors do too) out of a fear that using the mod tools will result in a group of people getting together to bully and call them names in front of the author’s peers. That’s a situation where authors become uncomfortable using their mod tools. But I don’t know precisely which comment was wrong, and what was wrong with it, such that had it not happened the outcome would counterfactually not have obtained, i.e. that you wouldn’t have found some other way to make the author uncomfortable using his mod tools (though we could probably all agree on some Schelling lines).
Also I am hesitant to fully outlaw behavior that might sometimes be appropriate. Perhaps there are some situations where it’s appropriate to criticize someone on your shortform after they banned you. Or perhaps sometimes you should call someone a coward for not engaging with your criticism.
Overall I believe sometimes I will have to look at the outcome and see whether the gain in this situation was worth the cost, and directly give positive/negative feedback based on that.
Related to other things you wrote, FWIW I think you have a personality that many people would find uncomfortable interacting with a lot. In person, I regularly read you as being deeply pained and barely able to contain strongly emotional and hostile outbursts. I think just trying to ‘follow the rules’ might not succeed at making everyone feel comfortable interacting with you, even via text, if they feel a deep hostility from you toward them that is struggling to contain itself with rules like “no explicit insults”, and sometimes the right choice for them will just be to not engage with you directly. So I think it is a hypothesis worth engaging with that you should work to change your personality somewhat.
To be clear, I think (as Said has said) that it is worth people learning to be able to make space to engage with people like you whom they find uncomfortable, because you raise many good ideas and points (engaging with you is something I relatively happily do, and it is a way I have grown stronger relative to myself of 10 years ago), and I hope you find more success, as I respect many of your contributions. But I think a great many people who have good points to contribute don’t have as much capacity as me to do this, and you will sometimes have to take some responsibility for navigating this.
If the popular kids in the cool kids’ club don’t like Goldstein and your only goal is to make sure that the popular kids feel comfortable, then clearly your optimal policy is to kick Goldstein out of the club. But if you have some other goal that you’re trying to pursue with the club that the popular kids and Goldstein both have a stake in, then I think you do have to try to evaluate whether Goldstein “did anything wrong”, rather than just checking that everyone feels comfortable. Just ensuring that everyone feels comfortable at all costs, without regard to the reasons why people feel uncomfortable or any notion that some reasons aren’t legitimate grounds for intervention, amounts to relinquishing all control to anyone who feels uncomfortable when someone else doesn’t behave exactly how they want.
Something I appreciate about the existing user ban functionality is that it is a rule-based mechanism. I have been persuaded by Achmiz and Dai’s arguments that it’s bad for our collective understanding that user bans prevent criticism, but at least it’s a procedurally “fair” kind of badness that I can tolerate, not completely arbitrary tyranny. The impartiality really helps. Do you really want to throw away that scrap of legitimacy in the name of optimizing outcomes even harder? Why?
I think just trying to ‘follow the rules’ might not succeed at making everyone feel comfortable interacting with you
But I’m not trying to make everyone feel comfortable interacting with me. I’m trying to achieve shared maps that reflect the territory.
A big part of the reason some of my recent comments in this thread appeal to an inability or justified disinclination to convincingly pretend not to be judgmental is that your boss seems to disregard with prejudice Achmiz’s denials that his comments are “intended to make people feel judged”. In response to that, I’m “biting the bullet”: saying, okay, let’s grant that a commenter is judging someone; to what lengths must they go to conceal that, in order to prevent others from predictably feeling judged, given that people aren’t idiots and can read subtext?
I think there’s something much more fundamental at stake here, which is that an intellectual forum that’s being held hostage to people’s feelings is intrinsically hampered and can’t be at the forefront of advancing the art of human rationality. If my post claims X, and a commenter says, “No, that’s wrong, actually not-X because Y”, it would be a non-sequitur for me to reply, “I’d prefer you engage with what I wrote with more curiosity and kindness.” Curiosity and kindness are just not logically relevant to the claim! (If I think the commenter has misconstrued what I wrote, I could just say that.) It needs to be possible to discuss ideas without getting tone-policed to death. Once you start playing this game of litigating feelings and feelings about other people’s feelings, there’s no end to it. The only stable Schelling point that doesn’t immediately dissolve into endless total war is to have rules and for everyone to take responsibility for their own feelings within the rules.
I don’t think this is an unrealistic superhumanly high standard. As you’ve noticed, I am myself a pretty emotional person and tend to wear my heart on my sleeve. There are definitely times as recently as, um, yesterday, when I procrastinate checking this website because I’m scared that someone will have said something that will make me upset. In that sense, I think I do have some empathy for people who say that bad comments make them less likely to use the website. It’s just that, ultimately, I think that my sensitivity and vulnerability is my problem. Censoring voices that other people are interested in hearing would be making it everyone else’s problem.
I think there’s something much more fundamental at stake here, which is that an intellectual forum that’s being held hostage to people’s feelings is intrinsically hampered and can’t be at the forefront of advancing the art of human rationality.
An intellectual forum that is not being “held hostage” to people’s feelings will instead be overrun by hostile actors who either are in it just to hurt people’s feelings, or who want to win through hurting people’s feelings.
It’s just that, ultimately, I think that my sensitivity and vulnerability is my problem.
Some sensitivity is your problem. Some sensitivity is the “problem” of being human and not reacting like Spock. It is unreasonable to treat all sensitivity as being the problem of the sensitive person.
Mm, I think sometimes I’d rather judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.
This made my blood go cold, despite thinking it would be good if Said left LessWrong.
My first thought when I read “judge on the standard of whether the outcome is good” is that this lets you cherry-pick your favorite outcomes without justifying them. My second is that knowing whether something is good can be very complicated even after the fact, so predicting it ahead of time is challenging even if you are perfectly neutral.
I think it’s good LessWrong(’s admins) allows authors to moderate their own posts (and I’ve used that to ban Said from my own posts). I think it’s good LessWrong mostly doesn’t allow explicit insults (and wish this was applied more strongly). I think it’s good LessWrong evaluates commenting patterns, not just individual comments. But “nothing that makes authors feel bad about bans” is way too far.
It’s extremely common for judicial systems to rely on outcome assessments instead of process assessments! In many domains this is obviously the right standard! It is very common to create environments where someone can sue for damages and not just have the judgement be dependent on negligence (and both thresholds are indeed commonly relevant for almost any civil case).
Like sure, it comes with various issues, but it seems obviously wrong to me to request that no part of the LessWrong moderation process relies on outcome assessments.
Okay. But I nonetheless believe that we sometimes have to judge communication by outcomes rather than by process.
Like, as a lower-stakes example, sometimes you try to teasingly make a joke at your friend’s expense, but they just find it mean, and you take responsibility for that and apologize. Just because you thought you were behaving right and communicating well doesn’t mean you were, and sometimes you accept feedback from others that says you misjudged a situation. I don’t have all the rules written down such that if you follow them your friend will read your comments as intended; sometimes I just have to check.
Similarly, sometimes you try to criticize an author, but they take it as implying you’ll push back whenever they enforce boundaries on LessWrong, and then you apologize and clarify that you do respect them enforcing boundaries in general but stand by the local criticism. (Or you don’t, and then site-mods step in.) I don’t have all the rules written down such that if you follow them the author will read your comments as intended; sometimes I just have to check.
Obviously mod powers can be abused, and having to determine things on a case-by-case basis is a power that can be abused. Obviously it involves judgment calls. I did not disclaim this; I’m happy for anyone to point it out; perhaps nobody has mentioned it so far in this thread, so it’s worth making sure the consideration is mentioned. And yeah, if you’re asking, I don’t endorse “nothing that makes authors feel bad about bans”, and there are definitely situations where I think it would be appropriate for us to reverse someone’s bans (e.g. if someone banned all of the top 20 authors in the LW review, I would probably think this is just not workable on LW and reverse that).
Sure, but “is my friend upset” is very different from “is the sum total of all the positive and negative effects of this, from first order to infinite order, positive”.
In person, I regularly read you as being deeply pained and barely able to contain strongly emotional and hostile outbursts.
Someone reacted to this with “Disagree”.
I have no idea how you could remotely know whether this is true, as I think you have never interacted with either Ben or Zack in person!
Also, it’s really extremely obviously true. Indeed, Zack frequently has the corresponding emotional and hostile outbursts, so it’s really extremely evident they are barely contained during a lot of it (since sometimes they do not end up contained, and then Zack apologizes for failing to contain them and explains that this is difficult for him).
You can write lots of judgmental comments criticizing an author’s posts, and then they can ban you from their comments because they find engaging with you to be exhausting, and then you can make a shortform where you and your friends call them a coward, and then they stop using the mod tools (and other authors do too) out of a fear that using the mod tools will result in a group of people getting together to bully and call them names in front of the author’s peers. That’s a situation where authors become uncomfortable using their mod tools.
Here’s what confuses me about this stance: do an author’s posts on Less Wrong (especially non-frontpage posts) constitute “the author’s private space”, or do they constitute “public space”?
If the former, then the idea that things that Alice writes about Bob on her shortform (or in non-frontpage posts) can constitute “bullying”, or are taking place “in front of” third parties (who aren’t making the deliberate choice to go to Alice’s private space), is nonsense.
If the latter, then the idea that authors should have the right to moderate discussions that are happening in a public space is clearly inappropriate.
I understood the LW mods’ position to be the former—that an author’s posts are their own private space, within the LW ecosystem (which is why it makes sense to let them set their own separate moderation policy there). But then I can’t make any sense of this notion of “bullying”, as applied to comments written on an author’s shortform (or non-frontpage posts).
It seems to me that these two ideas are incompatible.
What I find hard to understand is why the mod team seems to think it’s good for them to try to shape culture by means other than clear and explicit rules that could be neutrally enforced.
No judicial system in the world has ever arrived at the ability to have “neutrally enforced rules”, at least the way I interpret you to mean this. Case law is the standard in almost every legal tradition, and the US legal system relies heavily on things like “jury of your peers” type stuff to make judgements.
Intent frequently matters in legal decisions. Cognitive state of mind matters for legal decisions. Judges go through years of training and are part of a long lineage of people who have built up various heuristics and principles about how to judge cases. Individual courts have their own culture and track record.
And that is the US legal system, which is absolutely not capable of operating at anything remotely close to the kind of standard that would allow people to curate social spaces or deal with tricky kinds of social rulings. No company could make cultural or hiring or business decisions based on the standard of the US legal system. Neither could any internet forum.
There is absolutely no chance we will ever be able to codify LessWrong rules of conduct into a set of specific rules that can be neutrally judged by a third party. Zero chance. Give up. If that is something you need here, leave now. Feel free to try to build it for yourself.
I could see it being confusing because sometimes an author like Gordon is moderating you, and sometimes a site-mod like Habryka is moderating you, but they are using different standards, and the LW-mods are not typically endorsing the author standards as our own.
It’s not just confusing sometimes, it’s confusing basically all the time. It’s confusing even for me, even though I’ve spent all these years on Less Wrong, and have been involved in all of these discussions, and have worked on GreaterWrong, and have spent time thinking about moderation policies, etc., etc. For someone who is even a bit less “very on LW”[1]—it’s basically incomprehensible.
I mean, consider: whenever I comment on anything anywhere on this website, I have to not only keep in mind the rules of LW (which I don’t actually know, because I can’t remember in what obscure, linked-from-nowhere, hard-to-find, long, hard-to-parse post those rules are contained), and the norms of LW (which I understand only very vaguely, because they remain somewhere between “poorly explained” and “totally unexplained”), but also, in addition to those things, I have to keep in mind whose post I am commenting under, and somehow figure out from that not only what their stated “moderation policy” is (scare quotes because usually it’s not really a specification of a policy, it’s just sort of a vague allusion to a broad class of approaches to moderation policy), but also what their actual preferences are, and how they enforce those things.
(I mean, take this recent post. The “moderation policy” a.k.a. “commenting guidelines” are: “Reign of Terror—I delete anything I judge to be counterproductive”. What is that? That’s not anything. What is Nate going to judge to be “counterproductive”? I have no idea. How will this “policy” be applied? I have no idea. Does anyone besides Nate himself know how he’s going to moderate the comments on his posts? Probably not. Does Nate himself even know? Well, maybe he does, I don’t know the guy; but a priori, there’s a good chance that he doesn’t know. The only way to proceed here is to just assume that he’s going to be reasonable… but it is incredibly demoralizing to invest effort into writing some comments, only for them to be summarily deleted, on the basis of arbitrary rules you weren’t told of beforehand, or “norms” that are totally up to arbitrary interpretation, etc. The result of an environment like that is that people will treat commenting here as strictly a low-effort activity. Why bother to put time and thought into your comments, if “whoops, someone’s opaque whim dictates that your comments are now gone” is a strong possibility?)
The whole thing sort of works most of the time because most people on LW don’t take this “set your own moderation policy” stuff too seriously, and basically (both when posting and when commenting) treat the site as if the rules were something like what you’d find on a lightly moderated “nerdy” mailing list or classic-style discussion forum.
But that just results in the same sorts of “selective enforcement” situations as you get in any real-world legal regime that criminalizes almost everything and enforces almost nothing.
Yes, of course. I both remember and agree wholeheartedly. (And @habryka’s reply in a sibling comment seems to me to be almost completely non-responsive to this point.)
I think there is something to this, though I think you should not model status in this context as purely one dimensional.
Like, a culture of mutual dignity, where you maintain some basic level of mutual respect about whether other people deserve to live or deserve to suffer, seems achievable, and my guess is it is strongly correlated with more reasonable criticism being made.
I think parsing this through the lens of status is reasonably fruitful, and within that lens, as I discussed in other sub threads, the problem is that many bad comments try to make some things low status that I am trying to cultivate on the site, while also trying to avoid accountability and clarity over whether those implications are actually meaningfully shared by the site and its administrators (and no, voting does not magically solve this problem).
The status lens doesn’t shine much light on the passive vs. active aggression distinction we discussed. And again, as I said, it’s too one-dimensional, in that people don’t view ideas on LessWrong as having a strict linear status hierarchy. Indeed, ideas have lots of gears, and criticism does not primarily consist of lowering something’s status; that framing seems like it gets rid of basically all the real things about criticism.
many bad comments try to make some things low status that I am trying to cultivate on the site
I’m not sure what things you’re trying to cultivate in particular, but in general, I’m curious whether you’ve given any thought to the idea that the use of moderator power to shape culture is less robust to errors in judgement than trying to shape culture by means of just arguing for your views, for the reasons that Scott Alexander describes in “Guided by the Beauty of Our Weapons”. That is, in Alexander’s terminology, mod power is a “symmetric weapon” that works just as well whether the mods are right or wrong, whereas public arguments are an “asymmetric weapon” that’s more effective when the arguer is correct on the merits.
When I think rationalist culture is getting things wrong (whether that be an object-level belief, or which things are considered high or low status), I write posts arguing for my current views. While I do sometimes worry about whether my current views are mistaken, I don’t worry much about having a large negative impact if it turns out that my views are mistaken, because I think that the means by which I hope to alter the culture has some amount of built-in error correction: if my beliefs or status-assignment-preferences are erroneous in some way that’s not currently clear to me, others who can see the error will argue against my views in the comments, contributing to the result that the culture won’t accept my (ex hypothesi erroneous) proposed changes.
(In case this wasn’t already clear, this is not an argument against moderators ever doing anything. It’s a reason to be extra conservative about controversial and uncertain “culture-shaping” mod actions that would be very costly to get wrong, as contrasted to removing spam or uncontroversially low-value content.)
I have argued a lot for my views! My sense is they are broadly (though not universally) accepted among what I consider the relevant set of core stakeholders for LessWrong.
But beyond that, the core set of stakeholders is also pretty united behind the meta-view that in order for a place like LessWrong to work, you need the culture to be driven by someone with taste, who trusts their own judgements on matters of culture, and you should not expect that you will get consensus on most things.
My sense is there is broad buy-in that under-moderation is a much bigger issue than over-moderation. And also ‘convincing people in the comments’ doesn’t actually like… do anything. You would have to be able to convince every single person who is causing harm to the site, which of course is untenable and unrealistic. At some point, after you’ve explained your reasons, you have to actually enforce the things that you argued for.
In the beginning, while the community is still thriving, censorship seems like a terrible and unnecessary imposition. Things are still going fine. It’s just one fool, and if we can’t tolerate just one fool, well, we must not be very tolerant. Perhaps the fool will give up and go away, without any need of censorship. And if the whole community has become just that much less fun to be a part of… mere fun doesn’t seem like a good justification for (gasp!) censorship, any more than disliking someone’s looks seems like a good reason to punch them in the nose.
(But joining a community is a strictly voluntary process, and if prospective new members don’t like your looks, they won’t join in the first place.)
And after all—who will be the censor? Who can possibly be trusted with such power?
Quite a lot of people, probably, in any well-kept garden. But if the garden is even a little divided within itself —if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer—
(for such internal politics often seem like a matter of far greater import than mere invading barbarians)
—then trying to defend the community is typically depicted as a coup attempt. Who is this one who dares appoint themselves as judge and executioner? Do they think their ownership of the server means they own the people? Own our community? Do they think that control over the source code makes them a god?
I confess, for a while I didn’t even understand why communities had such trouble defending themselves—I thought it was pure naivete. It didn’t occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power. “None of us are bigger than one another, all of us are men and can fight; I am going to get my arrows”, was the saying in one hunter-gatherer tribe whose name I forget. (Because among humans, unlike chimpanzees, weapons are an equalizer—the tribal chieftain seems to be an invention of agriculture, when people can’t just walk away any more.)
Maybe it’s because I grew up on the Internet in places where there was always a sysop, and so I take for granted that whoever runs the server has certain responsibilities. Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam). Maybe because I grew up in that wide open space where the freedom that mattered was the freedom to choose a well-kept garden that you liked and that liked you, as if you actually could find a country with good laws. Maybe because I take it for granted that if you don’t like the archwizard, the thing to do is walk away (this did happen to me once, and I did indeed just walk away).
And maybe because I, myself, have often been the one running the server. But I am consistent, usually being first in line to support moderators—even when they’re on the other side from me of the internal politics. I know what happens when an online community starts questioning its moderators. Any political enemy I have on a mailing list who’s popular enough to be dangerous is probably not someone who would abuse that particular power of censorship, and when they put on their moderator’s hat, I vocally support them—they need urging on, not restraining. People who’ve grown up in academia simply don’t realize how strong are the walls of exclusion that keep the trolls out of their lovely garden of “free speech”.
Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving. But this is more accused than realized, so far as I can see.
In any case the light didn’t go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently. While reading a comment at Less Wrong, in fact, though I don’t recall which one.
But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay. Being too humble, doubting themselves an order of magnitude more than I would have doubted them. It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.
This about the Internet: Anyone can walk in. And anyone can walk out. And so an online community must stay fun to stay alive. Waiting until the last resort of absolute, blatant, undeniable egregiousness—waiting as long as a police officer would wait to open fire—indulging your conscience and the virtues you learned in walled fortresses, waiting until you can be certain you are in the right, and fear no questioning looks—is waiting far too late.
I have seen rationalist communities die because they trusted their moderators too little.[1]
I have very extensively argued for my moderation principles, and LessWrong has also very extensively argued about the basic premise of Well-Kept Gardens Die By Pacifism. Of course, not everyone agrees, but both of these seem to me to create a pretty good asymmetric-weapons case for the things that I am de facto doing as head moderator.
The post also ends with a call for people to downvote more, which I also mostly agree with, but it seems quite clear that de facto a voting system is not sufficient to avoid these dynamics.
the core set of stakeholders is pretty united behind the meta-view that in order for a place like LessWrong to work, you need the ability to have a culture be driven by someone with taste
Sorry, I don’t understand how this is consistent with the Public Archipelago doctrine, which I thought was motivated by different people wanting to have different kinds of discussions? I don’t think healthy cultures are driven by a dictator; I think cultures emerge from the interaction of their diverse members. We don’t all have to have exactly the same taste in order to share a website.
I maintain hope that your taste is compatible with me and my friends and collaborators continuing to be able to use the website under the same rules as everyone else, as we have been doing for fifteen years. I have dedicated much of my adult life to the project of human rationality. (I was at the first Overcoming Bias meetup in February 2008.) If Less Wrong is publicly understood as the single conversational locus for people interested in the project of rationality, but its culture weren’t compatible with me and my friends and collaborators doing the intellectual work we’ve spent our lives doing here, that would be a huge problem for my life’s work. I’ve made a lot of life decisions and investments of effort on the assumption that this is my well-kept garden, too; that I am not a “weed.” I trust you understand the seriousness of my position.
And also ‘convincing people in the comments’ doesn’t actually like … do anything.
Well, it depends on what cultural problem you’re trying to solve, right? If the problem you’re worried about is “Authors have to deal with unwanted comments, and the existing site functionality of user-level bans isn’t quite solving that problem yet, either because people don’t know about the feature or are uncomfortable using it”, you could publicize the feature more and encourage people to use it.
That wouldn’t involve any changes to site policy; it would just be a matter of someone using speech to tell people about already-existing site functionality and thus to organically change the local culture.
It wouldn’t even need to be a moderator: I thought about unilaterally making my own “PSA: You Can Ban Users From Commenting on Your Posts” post, but decided against it, because the post I could honestly write in my own voice wouldn’t be optimal for addressing the problems that I think you perceive.
That is, speaking for myself in my own voice, I have been persuaded by Wei Dai’s arguments that user bans aren’t good because they censor criticism, which results in less accurate shared maps; I think people who use the feature (especially liberally) could be said to be making a rationality mistake. But crucially, that’s just my opinion, my own belief. I’m capable of sharing a website with other people who don’t believe the same things as me. I hope those people feel the same way about me.
My understanding is that you don’t think that popularizing existing site functionality solves the cultural problems you perceive, because you’re worried about users “heap[ing] [...] scorn and snark and social punishment” on e.g. their own shortform. I maintain hope that this class of concern can be addressed somehow, perhaps by appropriately chosen clear rules about what sorts of speech are allowed on the topics of particular user bans or the user ban feature itself.
I think clear rules are important in an Archipelago-type approach for defining how the different islands in the archipelago interact. Attitudes towards things like snark is one of the key dimensions along which I’d expect the islands in an archipelago to vary.
I fear you might find this frustrating, but I’m afraid I still don’t have a good grasp of your conceptualization of what constitutes social punishment. I get the impression that in many cases, what me and my friends and collaborators would consider “sharing one’s honest opinion when it happens to be contextually relevant (including negative opinions, including opinions about people)”, you would consider social punishment. To be clear, it’s not that I’m pretending to be so socially retarded that I literally don’t understand the concept that sharing negative opinions is often intended as a social attack. (I think for many extreme cases, the two of us would agree on characterizing some speech as unambiguously an attack.)
Rather, the concern is that a policy of forbidding speech that could be construed as social punishment would have a chilling effect on speech that is legitimate and necessary towards the site’s mission (particularly if it’s not clear to users how moderators are drawing the category boundary of “social punishment”). I think you can see why this is a serious concern: for example, it would be bad if you were required to pretend that people’s praise of the Trump administration’s AI Action plan was in good faith if you don’t actually think that (because bad faith accusations can be construed as social punishment).
I just want to preserve the status quo where me and my friends and collaborators can keep using the same website we’ve been using for fifteen years under the same terms as everyone else. I think the status quo is fine. You want to get back to work. (Your real work, not whatever this is.) I want to get back to work. I think we can choose to get back to work.
We don’t all have to have exactly the same taste in order to share a website.
Please don’t strawman me. I said no such thing, or anything that implies such things. Of course not everyone needs to have exactly the same taste to share a website. What I said is that the site needs taste to be properly moderated, which of course does not imply everyone on it needs to share that exact taste. You occupy spaces moderated by people with different tastes from you and the other people within it all the time.
I maintain hope that your taste is compatible with me and my friends and collaborators continuing to be able to use the website under the same rules as everyone else, as we have been doing for fifteen years. I have dedicated much of my adult life to the project of human rationality. (I was at the first Overcoming Bias meetup in February 2008.) If Less Wrong is publicly understood as the single conversational locus for people interested in the project of rationality, but its culture weren’t compatible with me and my friends and collaborators doing the intellectual work we’ve spent our lives doing here, that would be a huge problem for my life’s work. I’ve made a lot of life decisions and investments of effort on the assumption that this is my well-kept garden, too; that I am not a “weed.” I trust you understand the seriousness of my position.
Yep, moderation sucks, competing access needs are real, and not everyone can share the same space, even within a broader archipelago (especially if one is determined to tear down that very archipelago). I do think you probably won’t get what you desire. I am genuinely sorry for this. I wish you good luck.[1]
Rather, the concern is that a policy of forbidding speech that could be construed as social punishment would have a chilling effect on speech that is legitimate and necessary towards the site’s mission (particularly if it’s not clear to users how moderators are drawing the category boundary of “social punishment”).
Look, various commenters on LW including Said have caused much much stronger chilling effects than any moderation policy we have ever created, or will ever create. It is not hard to drive people out of a social space. You just have to be persistent and obnoxious and rules-lawyer every attempt at policing you. It really works with almost perfect reliability.
forbidding speech that could be construed as social punishment
And of course, nobody at any point was arguing (and indeed I was careful to repeatedly clarify) that all speech that could be construed as social punishment is to be forbidden. Many people will try to socially punish other people. The thing that one needs to rein in to create any kind of functional culture is social punishment of the virtues and values that are good and should be supported and are the lifeblood of the site by my lights.
The absence of moderation does not create some special magical place in which speech can flow freely and truth can be seen clearly. You are welcome to go and share your opinions on 4chan or Facebook or Twitter or any other unmoderated place on the internet if you think that is how this works. You could even start posting on DataSecretLox if you are looking for something with more similar demographics as this place, and a moderation philosophy more akin to your own. The internet is full of places with no censorship, with nothing that should stand in the way of the truth by your lights, and you are free to contribute there.
My models of online platforms say that if you want a place with good discussion the first priority is to optimize its signal-to-noise ratio, and make it be a place that sets the right social incentives. It is not anywhere close to the top priority to worry about every perspective you might be excluding when you are moderating. You are always excluding 99% of all positions. The question is whether you are making any kind of functional discussion space happen at all. The key to doing that is not absence of moderation, it’s presence of functional norms that produce a functional culture, which requires both leading by example and selection and pruning.
I also more broadly have little interest in continuing this thread, so don’t expect further comments from me. Good luck. I expect I’ll write more some other time.
The thing that one needs to rein in to create any kind of functional culture is social punishment of the virtues and values that are good and should be supported and are the lifeblood of the site by my lights.
Well, I agree with all of that except the last three words. It seems to me that the thing that you’d need to rein in is the social (and administrative) punishment that you are doing, not anything else.
I’ve been reviewing older discussions lately. I’ve come to the conclusion that the most disruptive effects by far, among all discussions that I’ve been involved with, were created directly and exclusively by the LW moderators, and that if the mods had simply done absolutely nothing at all, most of those disruptions just wouldn’t have happened.
The only reason—the only reason!—why a simple question ended up leading to a three-digit-comment-count “meta” discussion about “moderation norms” and so on, was because you started that discussion. You, personally. If you had just done literally nothing at all, it would have been completely fine. A simple question would’ve been asked and then answered. Some productive follow-up discussion would’ve taken place. And that’s all.
Many such cases.
The absence of moderation does not create some special magical place in which speech can flow freely and truth can be seen clearly.
It’s a good thing, then, that nobody in this discussion has called for the “absence of moderation”…
My models of online platforms say that if you want a place with good discussion the first priority is to optimize its signal-to-noise ratio, and make it be a place that sets the right social incentives.
Thanks Said. As you know, I have little interest in this discussion with you, as we have litigated it many times.
Please don’t respond further to my comments. I am still thinking about this, but I will likely issue you a proper ban in the next few days. You will probably have an opportunity to say some final words if you desire.
The only reason—the only reason!—why a simple question ended up leading to a three-digit-comment-count “meta” discussion about “moderation norms” and so on, was because you started that discussion. You, personally. If you had just done literally nothing at all, it would have been completely fine. A simple question would’ve been asked and then answered. Some productive follow-up discussion would’ve taken place. And that’s all.
Look, this just feels like a kind of crazy catch-22. I weak-downvoted a comment, and answered a question you asked about why someone would downvote your comment. I was not responsible for anything but a small fraction of the relevant votes, nor do I consider any blame to have fallen upon me when honestly explaining my case for a weak-downvote. I did not start anything. You asked a question, I answered it, trying to be helpful in understanding where the votes came from.
It really is extremely predictable that if you ask a question about why a thing was downvoted, that you will get a meta conversation about what is appropriate on the site and what is not.
But again, please, let this rest. Find some other place to be. I am very likely the only moderator for this site that you are going to get, and as you seem to think my moderation is cause for much of your bad experiences, there is little hope in that changing for you. You are not going to change my mind in the 701st hour of comment thread engagement, if you didn’t succeed in the first 700.
Alright—apologies for the long delay, but this response meant I had to reread the Scaling Hypothesis post, and I had some motivation/willpower issues in the last week. But I reread it now.
I agree that the post is deliberately offensive at parts. E.g.:
But I think they lack a vision. As far as I can tell: they do not have any such thing, because Google Brain & DeepMind do not believe in the scaling hypothesis the way that Sutskever, Amodei and others at OA do. Just read through machine learning Twitter to see the disdain for the scaling hypothesis. (A quarter year on from GPT-3 and counting, can you name a single dense model as large as the 17b Turing-NLG—never mind larger than GPT-3?)
Google Brain is entirely too practical and short-term focused to dabble in such esoteric & expensive speculation, although Quoc V. Le’s group occasionally surprises you.
or (emphasis added)
OA, lacking anything like DM’s long-term funding from Google or its enormous headcount, is making a startup-like bet that they know an important truth which is a secret: “the scaling hypothesis is true!” So, simple DRL algorithms like PPO on top of large simple architectures like RNNs or Transformers can emerge, exploiting the blessings of scale, and meta-learn their way to powerful capabilities, enabling further funding for still more compute & scaling, in a virtuous cycle. [...]
and probably the most offensive part is the ending (won’t quote it to avoid cluttering the reply, but it’s in Critiquing the Critics, especially from “What should we think about the experts?” onward). You’re essentially accusing all the skeptics of falling victim to a bundle of biases/signaling incentives, rather than disagreeing with you for rational reasons. So you were right, this is deliberately offensive.
But I think the answer to the question—well, actually, let’s clarify what we’re debating; that might avoid miscommunication. You said this in your initial reply:
I can definitely say on my own part that nothing of major value I have done as a writer online—whether it was popularizing Bitcoin or darknet markets or the embryo selection analysis or writing ‘The Scaling Hypothesis’—would have been done if I had cared too much about “vibes” or how it made the reader feel. (Many of the things I have written definitely did make a lot of readers feel bad. And they should have. There is something wrong with you if you can read, say, ‘Scaling Hypothesis’ and not feel bad. I myself regularly feel bad about it! But that’s not a bad thing.) Even my Wikipedia editing earned me doxes and death threats.
So in a nutshell, I think we’re debating something like “will what I advocate mean you’ll be less effective as a writer” or more narrowly “will what I’m advocating for mean you couldn’t have written really valuable past pieces like the Scaling Hypothesis”. To me it still seems like the answer to both is a clear no.
The main thing is, you’re treating my position as if it’s just “always be nice”, which isn’t correct. I’m very utilitarian (about commenting and in general) (one of my main insights from the conversation with Zack is that this is a genuine difference). I’ve argued repeatedly that Said’s comment is ineffective, basically because of what Scott said in How Not to Lose an Argument. It was obviously ineffective at persuading Gordon. Now Said argued that persuading the author isn’t the point, which I can sort of grant, but I think it will be similarly ineffective for anyone sympathetic to religion for the same reasons. So it’s not that I terminally value being nice,[1] it’s that being nice is generally instrumentally useful, and would have been useful in Said’s case. But that doesn’t mean it’s necessarily always useful.
I want to call attention to my rephrasing of Said’s post. I still claim that this post would have been much more effective in criticizing Gordon’s post. Gordon would have reacted in a more constructive way, and again, I think everyone else who sympathizes with religion is essentially in the same position. This seems to me like a really important point.
So to clarify, I would not have objected to the Scaling Hypothesis post despite some rudeness. The rudeness has a purpose (the bolded sentence is the one that I remembered most from reading it all the way back, which is evidence for your claim that “those were some of the most effective parts”). And the context is also importantly different; you’re not directly replying to a skeptic; the post was likely to be read by lots of people who are undecided. And the fact that it was a super high effort post also matters because ‘how much effort does the other person put into this conversation’ is always one of the important parameters for vibes.
I also wanna point out that your response was contradictory in an important way. (This isn’t meant as a gotcha; I think it captures the difference between “always be nice” and “maximize vibes for impact under the constraint of being honest and not misleading”.) Because you said that you wouldn’t have been successful if you had worried about vibes, but also that you made the Scaling Hypothesis post deliberately offensive—which means you did care about vibes, you just didn’t optimize them to be nice in this case.
Idk if this is worth adding, but two days ago I remembered something you wrote that I had mentally tagged as “very rude”, and where following my principles would mean you’re “not allowed” to write that. (So if you think that was important to write in this way, then we have a genuine disagreement.) That was your response to now-anonymous on your Clippy post, here. Here, my take (though I didn’t reread, this is mostly from memory) is something like
the critique didn’t make a lot of sense because it boiled down to “you’re asserting that people would do xyz, but xyz is stupid”, which is a non sequitur (“people do xyz” and “xyz is stupid” can both be true)
your response was needlessly aggressive and you “lost” the argument in the sense that you failed to persuade the person who complained
it was absolutely possible to write a better reply here; you could have just made the above point (i.e., “it being stupid doesn’t mean it’s unrealistic”) in a friendly tone, and the result would probably have been that the commenter realizes their mistake; the same is achieved with fewer words and it arguably makes you look better. I don’t see the downside.
Strictly speaking I do terminally value being nice a little bit because I terminally value people feeling good/bad, but I think the ‘improve everyone’s models about the world’ consideration dominates the calculation.
I was actually already thinking about just people on LessWrong when I wrote that. I think it’s almost everyone on LessWrong.
There’s no way this is true.
What I can say is, if you are trying to convince me
Not really, no. As you say, you’ve made your position clear. I’m not sure what I could say to convince you otherwise, and that’s not really my goal, anyhow. As far as I’m concerned, what I’m saying is extremely obvious. For example, you write:
I think a community will achieve much better outcomes if being bothered by the example message is considered normal and acceptable, and writing the example message is considered bad.
And this is obviously, empirically false. The most intellectually productive environments/organizations in the history of the world have been those where you can say stuff like the example comment without concern for censure, and where it’s assumed that nobody will be bothered by it. (Again, see the Philip Greenspun MIT anecdote I cited for one example; but there are many others.)
(I don’t even believe that you’re not bothered by this kind of thing;[1] I think you are and it does change your conduct as well, although I totally believe that you believe you’re not bothered.)
I think that you are typical-minding very strongly. It seems as if you’re not capable of imagining that someone can fail to perceive the sort of thing we’re discussing as being some sort of social attack. This is causing you to both totally misunderstand my own perspective, and to have a mistaken belief about how “almost everyone on LessWrong” thinks. (I don’t know if you just haven’t spent much time around people of a certain mental make-up, or what.)
This is something I usually wouldn’t say out of politeness/vibe protection, but since you don’t think I should be doing that, saying it kind of feels more respectful, idk.
I appreciate it! I think this is actually an excellent example of how “vibe protection” is bad, because it prevents us from discussing this sort of thing—which is obviously bad, because it’s central to the disagreement!
I think that you are typical-minding very strongly. It seems as if you’re not capable of imagining that someone can fail to perceive the sort of thing we’re discussing as being some sort of social attack. This is causing you to both totally misunderstand my own perspective, and to have a mistaken belief about how “almost everyone on LessWrong” thinks. (I don’t know if you just haven’t spent much time around people of a certain mental make-up, or what.)
I think I’m capable of imagining that someone can fail to perceive this sort of thing. I know this because I did imagine this—when you told me you don’t care, and every comment I had read from you was in the same style, I (perhaps naively) just assumed that you’re telling the truth.
But then you wrote this reply to me, which was significantly friendlier than any other post you’ve written to me. This came directly after I said this
BTW I think asking me what I mean by vibes is completely reasonable. Someone strong-downvoted your comment I guess because it sounds pedantic but I don’t agree with this, I don’t think this is a case where the concept so obvious that you shouldn’t ask for a definition. (I strong-upvoted back to 0.)
And then also your latest comment (the one I’m replying to) is the least friendly, except for the final paragraph, which is friendly again. So, when I did something unusually nice,[1] you were nice in response. When I was the most rude, in my previous comment, you were the most rude back. Your other comments in this thread that stand out as nicer are those in response to Ben Pace rather than habryka.
… so in summary, you’re obviously just navigating social vibes like a normal person. I was willing to take your word that you’re immune, but not if you’re demonstrating otherwise! (A fun heuristic is just to look at {number of !}/{post length}. There are exceptions, but most of the time, !s soften the vibe.)
clarifying that this was not an intended trap; I just genuinely don’t get why the particular comment asking me to define vibes should get downvoted. (Although I did deliberately not explain why I said I don’t believe you; I wanted to see if you’d ask or just jump to a conclusion.)
Frankly, I think that you’re mistaking noise for signal here. There’s no “niceness” or “rudeness” going on in these comments, there are just various straightforwardly appropriate responses to various statements / claims / comments / etc.
But that’s just the thing: you shouldn’t be thinking of object-level discussions on LW as “social situations” which you need to “navigate”. If that’s how you’re approaching things, then of course you’re going to have all of these reactions—and you’ve doomed the whole enterprise right from the start! You’re operating on too high a simulacrum level. No useful intellectual work will get done that way.
There’s just no need for this sort of “higher simulacrum level” stuff. Is my comment “nice”? Is it “rude”? No, it’s just saying what I think is true and relevant. If you stop trying to detect “niceness” and “rudeness” in my comments, it’ll be simpler for everyone involved. That’s the benefit of abjuring “vibes”: we can get down to the important stuff.
… on the other hand, maybe everything I just said in the above paragraph is totally wrong, and you should instead try much harder to detect “vibes”:
I just genuinely don’t get why the particular comment asking me to define vibes should get downvoted
Do you mean this literally? Because that’s intensely ironic, if so! You see, it’s extremely obvious to me why that comment got downvoted. If I get it, and you don’t, then… what does that say about our respective ability to understand “vibes”, to “navigate social situations”, and generally to understand what’s going on in discussions like this? (No, really—what does it say about those things? That’s not a rhetorical question, and I absolutely cannot predict what your response is going to be.)
Do you mean this literally? Because that’s intensely ironic, if so! You see, it’s extremely obvious to me why that comment got downvoted.
I didn’t say I don’t get why it happened; I said I don’t get why it should happen, meaning I don’t see a reason I agree with; I think the comment is fine. (And if it matters, I never thought about what I think would have happened or why with this comment, so I neither made a true nor a false prediction.)
Separate response because this doesn’t matter for the moderation question (my argument here applies to personal style only) and also because I suspect this will be a much more unpopular take than the other one, so people may disagree-vote in a more targeted way.
All this to say that I’m averse to overtly optimizing the vibes to be more persuasive, because I don’t want to persuade people by means of the vibes. That doesn’t count!
The question of whether you should optimize personal writing for persuasion-via-vibes is one I’ve thought about a lot, and I think the correct answer is “yes”. Here are four reasons why.
One, you can adhere to a very high epistemic standard while doing this. You can still only argue for something if you believe it to be true and know why you believe it, and always give the actual reasons for why you believe it. (The State Science Institute article from your post responding to Eliezer’s meta-honesty notably fails this standard.) I’m phrasing this in a careful/weird way because I guess you are in some sense including “components” in your writing that will be persuasive for reasons-that-are-not-the-reasons-why-you-believe-the-thing-you’re-arguing-for, so you’re not only giving those reasons, but you can still always include those reasons. I mean, the truth is that when I write, I don’t spend much time explicitly checking whether I obey any specific rules; I just think I have a good intuitive sense of how epistemically pure I’m being. When I said in my comment 4 days ago that optimizing vibes doesn’t require you to lie “at all”, this feeling was the thing upstream of that phrasing. Like, I can write a post such that I have a good feeling about both the post’s vibes and its epistemic purity.
In practice, I just suspect that the result won’t look like anything you’d actually take issue with. E.g., my timelines post was like this. (And fwiw no one has ever accused me of being manipulative in a high-effort post, iirc.)
Two, I don’t think there is a bright line between persuasive vibes and not having anti-persuasive vibes. Say you start off having a writing style that’s actively off-putting and hence anti-persuasive. I think you’re “allowed” to clean that up? But then when do you have to stop?
Three, it’s not practically feasible to not optimize vibes. It is feasible to not deliberately optimize vibes, but if you care about your writing, you’re going to improve it, and that will make the vibes better. Scott Alexander is obviously persuasive in part because he’s a good writer. (I think that’s obvious, anyway.) I think your writing specifically actually has a very distinct vibe, and I think that significantly affects your persuasiveness, and you could certainly do a lot worse as far as the net effect goes, so… yeah, I think it is in fact true to say that you have optimized your vibes to be more persuasive, just not intentionally.
And four, well, if there’s a correlation between having good ideas and having self-imposed norms on how to communicate, which I think there is, then refusing to optimize vibes is shooting yourself/your own team/the propagation of good ideas in the foot. You could easily come up with a toy model where there are two teams, one optimizes vibes and one doesn’t, and the one that does gradually wins out (see the sketch below).
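To make that concrete, here is a minimal sketch of the kind of toy model I have in mind (purely a hypothetical illustration of my own; the persuasion probabilities p_a and p_b and the whole setup are made up, and only the gap between them matters):

```python
# Hypothetical toy model: two "teams" hold competing ideas of equal merit.
# Team A's writers optimize vibes (slightly higher chance of persuading a
# reader per exposure); team B's don't. Readers adopt the writer's idea when
# persuaded. Even a small per-exposure edge compounds over many interactions.
import random

def simulate(rounds=20000, population=1000, p_a=0.12, p_b=0.10, seed=0):
    """Return the fraction of the population holding idea A after each interaction.

    p_a / p_b are assumed per-exposure persuasion probabilities for team A
    (vibe-optimizing) and team B (not); the exact values are arbitrary.
    """
    rng = random.Random(seed)
    holds_a = [rng.random() < 0.5 for _ in range(population)]  # start roughly 50/50
    history = []
    for _ in range(rounds):
        reader, writer = rng.randrange(population), rng.randrange(population)
        if reader == writer:
            continue
        # The writer argues for their own idea; persuasion chance depends on their team.
        p = p_a if holds_a[writer] else p_b
        if rng.random() < p:
            holds_a[reader] = holds_a[writer]
        history.append(sum(holds_a) / population)
    return history

if __name__ == "__main__":
    frac_a = simulate()
    print(f"share holding idea A after {len(frac_a)} interactions: {frac_a[-1]:.2f}")
```

Run it with a few different seeds and the share holding idea A drifts upward over time, which is the “gradually wins out” claim in miniature.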
I think right now the situation is basically that ~no one has a good model of how vibes work, so people just develop their own vibes, and some of them happen to be good for persuasion and some don’t. I’d probably estimate the net effect of this much higher than most people; as I indicated in my comment 4 days ago, I think the idea that most people on LW are not influenced by vibes is just not true at all. (Though the effect is higher outside LW, which also matters.) Which is kind of a shitty situation.
Like I said, I think this doesn’t have a bearing on the moderation question, but I do think it’s actually a really important point that many people will have to grapple with at some point. Ironically I think the idea of optimizing vibes for persuasion has very ugly vibes (like a yuck factor to it), which I definitely get.
I upvoted this comment but strongly disagree-voted. (This is unusual enough that I mention it.) The following are some scattered notes, not to be taken as a comprehensive reply.
Firstly, I think that your thinking about this subject could stand to be informed a lot more by the “selective” vs. “corrective” vs. “structural” trichotomy.[1] In particular, you essentially ignore selective approaches; but I think that they are of critical importance, and render a large swath of what you say here largely moot.
Second… I must’ve linked to this comment thread by Vladimir_M several dozen times by now, but I think it still hasn’t really “reached conceptual fixation”, so I’m linking it again. I highly recommend reading it in detail (Vladimir_M was one of LW’s most capable and insightful commenters, in the entirety of the site’s history), but the gist is that while a person could claim to be experiencing some negative emotional effect of some other person’s words or actions, could even actually, genuinely be experiencing that negative emotional effect, nevertheless the actual cause of that emotional effect is an unconscious strategic calculation that is based entirely on status dynamics. Change the status dynamics, and—like magic!—the experienced emotional effects will change, or even vanish entirely. This means that taking the emotional effects (which, I repeat, may be entirely “real”, in the sense that they are not consciously falsified) as “brute facts” is a huge mistake, both descriptively and strategically: it simply gets the causation totally wrong, and creates hideously bad incentives.
And that, in turn, means that all of the reasons you give for “coddling”, for attending to “vibes”, etc., are based on a radically mistaken model of interpersonal dynamics; and that rather than improving anything, doing what you suggest is precisely the worst thing that we could be doing. To the extent that we’re doing it already, it’s the source of most of our problems; to the extent that we could be doing it even more, it’s going to cause even worse problems than we already see.
As for Said specifically, I have no memories of being upset about his comments on my posts (it’s possible it happened and I forgot), but I have many (non-specific) memories of seeing his comments on different posts and being like, “ohh no this is not going to be helpful :(” even though iirc I agree with him more often than not.
This, for example, seems like a clear case of perception of status dynamics.
I’m very confident that the net impact would be a lot more positive if [Said] articulated identical points in a different style. … As I said, I think there’s just not much of a tradeoff here. I mean there’s a tradeoff for the commenter since it takes effort to be nice. But there’s not much of a tradeoff for the product (the comment). Maybe it’ll be longer, but, I mean.
It’s not possible to “articulate identical points in a different style”.
If it were possible and if I did it, it would have exactly the same effect.
The trade-off is huge for both the commenter and (even more importantly!) for readers.
Writing more words to express the same idea is bad.
Again, this has all been discussed ad nauseam, and all of the points you cite have been quite thoroughly rebutted, over and over and over. (I don’t mean this as a rebuff to you—there’s no reason you should be expected to have followed these discussions or even to know about them. I am only saying that none of these points are new, and there is absolutely nothing in what you say here that I—and, I expect, Zack as well—haven’t already considered at length.)
In case it wasn’t clear, I think all the caring about vibes is entirely justified as an instrumental reason. I do think it’s also terminally good if people feel better, but I think everything I said holds if we assign that 0 weight.
And to summarize my response: not only is “caring about vibes” instrumentally very bad, but also, the idea that “caring about vibes” makes people feel better, while “not caring about vibes” makes people feel worse, is just mistaken.
The important things in interacting on a public forum for intellectual discussion are honesty, integrity, and respect for one’s interlocutor as someone who is assumed to be capable of taking responsibility for their own mind and their own behavior. (In other words, a person-interface approach.)
(As usual, none of this is to be taken as an endorsement of vulgarity, insults, name-calling, etc.; the normal standards of basic decency toward other people, as seen in ordinary intellectual society, still apply. The MIT professor from Philip Greenspun’s story probably wasn’t going around calling his students idiots or assholes, and we shouldn’t do such things either.)
I apologize for the self-serving nature of that objection; but then, I did write that post because I find this conceptual distinction to be very often useful, and also very neglected.
(I had not encountered any of the resources you linked, but I have now mostly read them before replying—I skipped e.g. some child threads in the Vladimir_M thread.)
Firstly, I think that your thinking about this subject could stand to be informed a lot more by the “selective” vs. “corrective” vs. “structural” trichotomy.[1] In particular, you essentially ignore selective approaches; but I think that they are of critical importance, and render a large swath of what you say here largely moot.
To make sure I understand. Are you saying, “my style of commenting will cause some users to leave the site, and those will primarily be users that are a net negative for the site, so that’s a good thing?”
Assuming that is the argument, I don’t agree that this is an important factor in your favor. Insofar as the unusual property of your commenting style is the vibes, it does a worse job at selection than a nice comment with identical content would do.
(If you’re just arguing about the net impact of your comments vs. the counterfactual of you not writing them at all—rather than whether they could be written differently—then I still disagree, because I think the ‘driving people away’ effect will be primarily vibe-based in your case, and probably net harmful.)
the gist is that while a person could claim to be experiencing some negative emotional effect of some other person’s words or actions, could even actually, genuinely be experiencing that negative emotional effect, nevertheless the actual cause of that emotional effect is an unconscious strategic calculation that is based entirely on status dynamics. Change the status dynamics, and—like magic!—the experienced emotional effects will change, or even vanish entirely. This means that taking the emotional effects (which, I repeat, may be entirely “real”, in the sense that they are not consciously falsified) as “brute facts” is a huge mistake, both descriptively and strategically: it simply gets the causation totally wrong, and creates hideously bad incentives.
I read the comment thread before your summary, and this is definitely not what I would have said the gist of the comment thread was. I’d have said the main point was that, if you have a culture that terminally values psychological harm minimization, this allows for game-theoretical exploits where people either pretend to be hurt or modify themselves to be actually hurt.
Response to your summary: I haven’t asserted any causation. Even if your description is true, it’s unclear how this contradicts my position. (Is it true? Most complicated question we’ve touched so far, imo, big rabbit hole, probably not worth going into. But my model agrees that status dynamics play a gigantic role.)
Response to what I thought the gist was: I agree that exploitation is a big problem. I disagree that this is enough of a reason not to optimize for vibes. I think in practice it’s less of a problem than Vladimir makes it sound, for the particular interventions I suggest (like optimizing vibes in your commenting style and considering it as a factor for moderation decisions), because (a) some people are quite good at seeing whether someone is sincere and are hard to trick, and I think this ability is crucial for being a good mod, and (b) I don’t think it sets particularly bad incentives for self-modification, because you don’t actually get a lot of power from having your feelings hurt under the culture I’m advocating for.
But, even if it were a bigger problem—even a much bigger problem—I would still not consider it a fatal rebuttal. I view this sort of like saying that having a karma system is bad because it can be exploited. In fact it is exploited all the time, but it’s still a net positive. You don’t just give up on modeling one of the most important factors of how brains work because your system of doing so will be exploited. You optimize anyway and then try to intelligently deal with exploitation as best as you can.
Again, this has all been discussed ad nauseam, and all of the points you cite have been quite thoroughly rebutted, over and over and over.
The people in the comment threads you linked didn’t seem to be convinced, so I think a more accurate summary is, “I’ve discussed this several times before, and I think I’m right.”
If you think that this is not worth discussing again and therefore it’s not worth continuing this particular conversation, then I’m fine with that, I don’t think you have any obligation to respond to this part of the comment, or the entire comment. (I wanna point out that I wrote my initial comment to Zack, not to you—though I understand that I mentioned you, which I thought was kind of unavoidable, but I concede that it can be viewed as starting a conversation with you.)
You can probably guess this, but I’m not convinced by your arguments, and I think the first two bullet points are completely false, and the third is mostly false. (I agree with the last one, but changing vibes doesn’t make comments that much longer; my initial comment here was long for specific reasons that don’t generalize.) I used to have a commenting style much closer to yours, and now I don’t, so I know you can in fact dramatically change vibes without changing content or length all that much. It’s difficult to convince me that X isn’t possible when I’ve done X.
(When you say “I have no idea why your proposed alternative version of my comment would be ‘less social-attack-y’” then I believe you, but so what? (I can see immediately why the alternative version is less social-attack-y.) If the argument were “what you’re advocating for is unfair toward people who aren’t as good at understanding vibes”, then I’d take this very seriously, but I won’t reply to that until you’re actually making that argument.)
To make sure I understand. Are you saying, “my style of commenting will cause some users to leave the site, and those will primarily be users that are a net negative for the site, so that’s a good thing?”
No.
I am saying that if we have a forum with the attitude and approach that I recommend, then those people will be attracted to the forum who are suited to a forum like that, and those who are not suited to it will mostly stay away. This is a much more effective way of building a desirable forum culture than trying to have existing members alter their behavior to “optimize for vibes”.
(Of course this works in reverse, too. The current administration of LW have built the currently active forum culture not by getting people to change their behavior, but by driving away people who find their current approach to be bad, and attracting people who find their current approach to be good.)
Assuming that is the argument, I don’t agree that this is an important factor in your favor. Insofar as the unusual property of your commenting style is the vibes, it does a worse job at selection than a nice comment with identical content would do.
This is a moot point given that the assumption doesn’t hold, but I just want to note that there is no such thing as “a nice comment with identical content” (as some purportedly “not-nice” comment). If you say something differently, then you’ve said something different. Presentation cannot be separated from content.
Response to what I thought the gist was: I agree that exploitation is a big problem. I disagree that this is enough of a reason not to optimize for vibes. I think in practice it’s less of a problem than Vladimir makes it sound, for the particular interventions I suggest (like optimizing vibes in your commenting style and considering it as a factor for moderation decisions), because (a) some people are quite good at seeing whether someone is sincere and are hard to trick, and I think this ability is crucial for being a good mod, and (b) I don’t think it sets particularly bad incentives for self-modification, because you don’t actually get a lot of power from having your feelings hurt under the culture I’m advocating for.
Yeah, you’ve definitely missed the point.
As you say, this is rather a large rabbit hole, but I’ll just note a couple of things:
some people are quite good at seeing whether someone is sincere and are hard to trick
This is a total, fundamental misunderstanding of the claim. The people who are experiencing the negative emotions in the sorts of cases that Vladimir_M is talking about are sincere! They sincerely, genuinely, un-feignedly feel bad!
It’s just that if the incentives and the status dynamics were different, those people would feel differently.
There is usually nothing conscious about it, and no “tricking” involved.
I don’t think it sets particularly bad incentives for self-modification, because you don’t actually get a lot of power from having your feelings hurt under the culture I’m advocating for
You get all the power from that, under the culture you’re advocating for. The purported facts about who gets their feelings hurt by what is the motivating principle of the culture you’re advocating for! By your own description, this is a culture of “optimizing for vibes”!
But, even if it were a bigger problem—even a much bigger problem—I would still not consider it a fatal rebuttal. I view this sort of like saying that having a karma system is bad because it can be exploited. In fact it is exploited all the time, but it’s still a net positive. You don’t just give up on modeling one of the most important factors of how brains work because your system of doing so will be exploited. You optimize anyway and then try to intelligently deal with exploitation as best as you can.
See above. Total misunderstanding of the causation. Your model simply gets things backwards.
Again, this has all been discussed ad nauseam, and all of the points you cite have been quite thoroughly rebutted, over and over and over.
The people in the comment threads you linked didn’t seem to be convinced, so I think a more accurate summary is, “I’ve discussed this several times before, and I think I’m right.”
Sure they weren’t convinced. What, did you expect replies along the lines of “yeah you’re totally right, after reading what you just wrote there, I hereby totally reverse my view on the matter”? As I’ve written before, that would be a bad idea! It is proper that no such replies were forthcoming, even conditional on my arguments having been completely correct.
But my interlocutors in those discussions also didn’t provide anything remotely resembling coherent or credible counter-arguments, weighty contrary evidence, etc.
(In any case, why rely on others? Suppose they had been convinced—so what? I claim that the points you cite have been thoroughly rebutted. If I am wrong about that, and a hundred people agree with me, then I am still wrong. I didn’t link those comment threads because I thought that everyone agreed with me, I linked them because I consider my arguments there to have been correct. If you disagree, fine and well; but that’s whose opinion matters here, not some other people’s.)
You can probably guess this, but I’m not convinced by your arguments, and I think the first two bullet points are completely false, and the third is mostly false. (I agree with the last one, but changing vibes doesn’t make comments that much longer; my initial comment here was long for specific reasons that don’t generalize.) I used to have a commenting style much closer to yours, and now I don’t, so I know you can in fact dramatically change vibes without changing content or length all that much. It’s difficult to convince me that X isn’t possible when I’ve done X.
Well, having traded high-level overviews, nothing remains for us at this point but to examine specific examples. If you have such, I’m interested to see them. (That’s as far as the first bullet point goes, i.e. “it’s not possible to ‘articulate identical points in a different style’”.)
As to the second bullet point (“if it were possible and if I did it, it would have exactly the same effect”), I am quite certain about this because I’ve experienced it many times.
Here’s the thing: when someone (who has some stake in the situation) tells you that “it’s not what you said, it’s how you said it”, that is, with almost no exceptions ever, a deliberate attempt to get you to not say that thing at all, in any way. It is a deliberate attempt to impose costs on your ability to say that thing—and if you change the “how”, then they will simply find another thing to criticize in “how”, all the while denying that the problem is with the “what”.
(See this recent discussion for a perfect example. I say critical things directly—I get moderated for it. I don’t say such things directly, I get told that I’m being “passive-aggressive”, that what I wrote is “the same thing even though you successfully avoided saying the literal words”, that it’s “obvious” that I meant the same thing, we have a moderator outright admitting that he reads negative connotations into my comments, etc., etc. We even see a moderator claiming, absurdly, that it would be better if I were to outright call people stupid and evil! How’s that for “vibes optimization”, eh? And what’s the likelihood that “you are stupid and evil” would actually not draw moderator action?)
I’ve seen this play out many, many, many times, and not only with myself as the target. As I’ve mentioned, I do now have some experience running my own discussion forum, with many users, various moderators, various moderation approaches, etc. I have seen this happening to other people, quite often.
When someone whose interests are opposed to yours tells you that “it’s not what you said, it’s how you said it”, the appropriate assumption to make is that they’re lying. The only real question is whether they’re also lying to themselves, or only to you. (Both variants happen often enough that one should not have strong priors either way.)
(When you say “I have no idea why your proposed alternative version of my comment would be ‘less social-attack-y’” then I believe you, but so what? (I can see immediately why the alternative version is less social-attack-y.) If the argument were “what you’re advocating for is unfair toward people who aren’t as good at understanding vibes”, then I’d take this very seriously, but I won’t reply to that until you’re actually making that argument.)
I’m afraid that you are responding to a strawman of my point.
You quote the first sentence of the linked comment, but of course it was only the first sentence; in the rest of that comment, I go on to say that I do not, in fact, think that the proposed alternative version of my comment would be “less social-attack-y”, and furthermore that I think that neither version of my comment is, or would be, “social-attack-y” at all; but that nevertheless, either version would be equally perceived as being a social attack, by those who expect to benefit from so perceiving it. As I said then:
Were someone else to write exactly the words I wrote in my original comment, they would not be perceived as a social attack; whereas if I write those words—or the words you suggest, or any other words whatsoever, so long as they contained the same semantic content at their core[1]—they will be perceived as a social attack. After all, I can say something different, but I cannot mean something different.
The fact is, either you think that asking what an author means by a word, or asking for examples of some phenomenon, is a social attack, or you don’t. If I ask a question along such lines, no reassurances, no disclaimers, will serve to signal anything but “I am complying with the necessary formalities in order to ask what I wish to ask”. If you think my question is a social attack without the disclaimers, then their addition will change nothing. It is the question, after all, that constitutes the social attack, if anything does—not the form, in other words, but the content.
So this is not a matter of me “not understanding vibes”. It is a matter of you being mistaken about the role that “vibes” play in situations like this.
Note that the person that I’m talking to, in that comment thread—the one who gave the proposed alternate formulation of my comment—then writes, in response to (and in partial agreement with) my above-quoted comment:
I do feel like it’s the case that your speech style is more likely to be perceived as a social attack coming from you than from someone else.
I wish it weren’t so. It’s certainly possible for “the identity and history of the speaker” to be a meaningful input into the question “was this a social attack”. But I think the direction is wrong, in this case. I think you’re the single user on LW who’s earned the most epistemic “benefit of the doubt”. That is, if literally any other user were to write in the style you write, I think it would be epistemically correct to give more probability to it being a social attack than it is for you.
And yet here we are. I don’t claim to fully understand it.
(This is also what it looks like when a person perceives status dynamics without recognizing this fact.)
Well-kept gardens do not tend to die by accepting obviously norm-violating content. They usually die by people being bad discourse participants in plausible deniable ways, just kind of worse, but not obviously and unambiguously worse, than what has come before.
This doesn’t seem compatible with reality as I understand it. I am not familiar with any example of the latter, and I have seen dozens of instances of the former. I’d appreciate examples[1] illustrating why I’m wrong.
Have you met a user called “aranjaegers” in LessWrong-adjacent Discord servers? (LessWrong name: @Bernd Clemens Huber.) He is infamously banned from 50+ rationalist-adjacent servers—for being rude, spamming walls of text of his arguments (which he eventually improved on), being too pompous about his areas of interest, etc. I think his content and focus area are mostly fine; he can be rude here and there, and there are the walls of text, which he restricts to other channels if asked. He’s barely a crackpot—plausibly-deniably not a crackpot—operating from the inside view, and a bit too straightforward in calling out what he thinks are stupid or clownish things (although I personally think he’s rationalising). After other servers banned him, the main unofficial lw-cord—maintained with extremely light moderation by a single volunteer, who thought Aran Jaeger was good at scaring away certain types of people—got captured by him, and the Discord got infamous for being a containment chamber for this person. Eventually, after a year, the moderator muted him for a day because he was being rude to @Kabir Kumar, so he left. (I tracked this situation for multiple months.)
That was from before. I convinced him to condense his entire wall of text into 4 premises[1]—I used the analogy of it being a test for finding interested people, so that he could expand later with his walls of text—but that took around 3 hours of back and forth in lw-cord, because otherwise it goes in circles. Besides, I find him funny too. He still managed to get banned from multiple servers afterwards, so I think it’s just his personality and social skills. It’s possible to nudge him in certain directions, but it takes a lot of effort; his bottom line is kind of set on his cause.
I would summarise it as “evolutionary s-risk due to exponentially increasing contamination by panspermia caused by space exploration”. (He thinks the current organisations monitoring this are dysfunctional.)
Other trivia: I told him to go attend an EA meetup in Munich. He was convinced he would make an impact, but was disappointed that only a few people attended, although his impression was mostly positive. (If you know about more meetups or events in Munich regarding this particular cause, let me know and I will forward it to him.)
On the lw-cord thing: Kabir Kumar posted an advert for an event he was hosting, with some slogan calling in whoever was qualified. Aran basically went on to say (paraphrasing) “but I am the most important person and he banned me from his server, so he’s a liar”; the lw-cord mod got mildly annoyed at his rude behavior and muted him.
But he didn’t actually leave because he got muted—he has been muted several times, across hundreds of servers—he cited the reason that some other user in the Discord was obnoxious to him from time to time. This same user had been called a “clown” by Aran when they had an ethical disagreement, and renamed his server alias to “The Clown King” to mock Aran. He has also had a change of heart with that approach, given that not many people on Discord took his cause as seriously as he does. Nowadays he’s in his moral-ambition phase; he even enrolled in a Mars innovation competition for children and got graded 33⁄45, because his project didn’t innovate anything—he just posted about his ethical cause.
He has been one-shotted by the inside view enough times that he thinks he has access to infohazards with the potential to stop Elon Musk from launching things into space. For example, his latest public one is that, under the UN weapons-of-mass-destruction treaty, all interplanetary space travel is prohibited and people should be prosecuted for it.[2]
He has a master’s in maths; his now-deleted Reddit account is u/eterniseddragon; he has sent emails regarding his cause to 100k people and organisations (easily searchable on lw-cord)[3]; he even has a text file with all the mail IDs, etc.
Premise 1: Evolution of life on exoplanets or solar system ice moons, if it happened or were to be caused as consequence of being risked to be caused, intentionally so or by accident, would entail an—by orders of magnitudes unprecedentedly—enormous amount of eventual far-future wild animal suffering.
Premise 2: Evolution can unfold in millions of different ways.
Premise 3: The window of possible outcomes from such evolution processes (between best and worst versions of evolution) in terms of well-being or suffering is extremely large, i.e. the interval size of the total summed up suffering is gargantuan.
Premise 4: Absolutely any form of near-future introduction of microbes to planets or moons likely leads to an intolerably/unacceptably sub-optimal or negative outcome for an enormous number of animals eventually emerging from these microbes, leading to incompensatable scales of suffering.
Conclusion: Humanity at almost any costs (outside of humanity’s extinction), in about the worst case including even MAD, must prevent/avoid so-called interplanetary/interstellar microbial forward contamination for centuries, based on utilitarianism, the fundamental ethical principle, together with the rational, unbiased-compassion-requiring but non-negotiable trolley problem solution logic. Morality is scientific, not made up. We must not let this happen!
Subject: Irrefutable Proof that Interplanetary Space Probes constitute Weapons of Mass Destruction & Immediate Legal Consequences are Necessary RETURN, CEASE & DESIST ALL CURRENT AND FUTURE INTERPLANETARY MISSIONS IMMEDIATELY! THIS IS A MORAL & LEGAL ULTIMATUM AS PER ARTICLES IV & IX OF THE OUTER SPACE TREATY! https://www.sciencedirect.com/science/article/pii/S009457652500181X We demand an immediate moratorium on activities and technologies that risk microbial interplanetary forward contamination! Proof of the applicability of Article IV of the OST, based on the definition of WMDs in 1977 by the General Assembly of the UN through its resolution referred to as “A/RES/32/84-B”: https://www.unrcpd.org/wmd/ “Weapons of mass destruction (WMDs) constitute a class of weaponry with the potential to: [...]
Disseminate disease-causing organisms or toxins to harm or kill humans, animals or plants;”
Interplanetary space probes carrying microbes are a means by which those microbes can be disseminated, and microbes are a form of organisms capable of causing disease, and therefore they are disease-causing organisms, as specified in the above official declaration in the defining context of WMDs. Microbes can kick-start evolution causing animal suffering for eons.
I touched grass with long-term commitment a month ago and left Discord, Twitter, and Reddit in general (except for DMs and real-life work), so I cannot link this here, but you may recognise me, if you have been on a few of those Discords—namely bayesianconspiracy and the lesswrong discord—by my multiple usernames and accounts: dogmaticrationalist, militantrationalist, curiousinquirer, averagepcuser, averagediscorduser, RatAnon.
I even trolled a bunch of Discord servers by creating a huge list of links in a Discord thread on lw-cord, so that Aran would find it easier to find servers (although I wasn’t that explicit about my non-altruistic motives), but it was funny to watch him go and get banned. The Optimised Dating server banned him very quickly, from what I have heard. In hindsight, I apologize for any inconvenience.
More than half of the authors to this site who have posted more than 10 posts, about you, in particular. Eliezer, Scott Alexander, Jacob Falkovich, Elizabeth Van Nostrand, me, dozens of others. This is not a rare position. I would have to dig to give you an exact list, but the list is not short, and it includes large fractions of almost everyone who one might consider strong contributors to the site.
I see, thanks.
maybe you should just stay out of these conversations
Am I to take this as a statement of a moderation decision, or merely your personal opinion?
If the former—then, of course, I hear and obey. (However, for the remainder of this comment I’ll assume that it’s the latter.)
A good start, if you actually wanted to understand any of this at all, would be to stop strawmaning these people repeatedly by inserting random ellipses and question marks and random snide remarks implying the absurdity of their position.
No, I don’t think that I’m strawmanning anything. You keep saying this, and then your supposed corrections just restate what I’ve said, except with different valence. For instance:
Yes, shockingly, people have preferences about how people interact with them that go beyond obvious unambigious norm violations, what a shocker!
This seems to be just another way to describe what I wrote in the grandparent, except that your description has the connotation of something fine and reasonable and unproblematic, whereas mine obviously does not.
Of course people have such preferences! Indeed, it’s not shocking at all! People prefer not to have their bad ideas challenged, they prefer not to have obvious gaps in their reasoning pointed out, they prefer that people treat all of their utterances as deserving of nothing less than “curious”, “kind”, “collaborative” replies (rather than pointed questions, direct and un-veiled criticism, and a general “trial by fire”, “explore it by trying to break it” approach)?! Well… yeah. Duh. Humans are human. No one is shocked.
(And people will, if asked, couch these preferences in claims about “bad discourse in plausibly deniable ways”, etc.? Again: duh.)
And I must point out that for all your complaints about strawmanning, you don’t seem to hesitate in doing that very thing to me. In your reply, you write as if I hadn’t included the parenthetical, where I clarify that of course I can understand the mindset in question, if I allow certain unflattering hypotheses into the space of possibilities. You might perhaps imagine reasons why I would be initially reluctant to do this. But that’s only initially. To put it another way, I have a prior against such hypotheses, but it’s not an insuperable one.
So, yes, I understand just fine; I am quite capable of “modeling the preferences” of such people as you mention. No doubt you will reply: “no, actually you don’t, and you aren’t”. But let’s flesh out this argument, proactively. Here’s how it would go, as far as I can tell:
“You are ascribing, to individuals who are clearly honest people of high integrity and strength of character, preferences and motivations which are indicative of the opposite of those traits. Therefore, your characterization cannot be accurate.”
“One man’s modus ponens is another man’s modus tollens. The behavior of the people in question points unambiguously at their possessing the ascribed preferences and motivations (which are hardly improbable a priori, and must be actively fought against even by the best of us, not simply assumed not to be operative). Therefore, perhaps they are not quite so honest, their integrity not so high, and their strength of character not so great.”
I don’t know what exactly you’d then say in response to this—presumably you won’t be convinced, especially since you included yourself in the given set of people. And, to be clear, I don’t think that disagreeing with this argument is evidence of anything; I am certainly not saying anything like “aha, and if you reject this argument that says that you are bad, then that just proves that you are bad!”.
I outline this reasoning only to provide a countervailing model, in response to your own argument that I am simply clueless, that I have some sort of inability to understand why people do things and what they want, etc. No, I certainly do have a model of what’s going on here, and it predicts precisely what we in fact observe. You can argue that my model is wrong and yours is right, but that’s what you’ll have to argue—“you lack a model that describes and predicts reality” is not an argument that’s available to you in this case.
One of these days, I will probably need to write an essay, which will be titled “‘Well-Kept Gardens Die By Pacifism’ Considered Harmful”. That day will not be today, but here’s a small down payment on that future essay:
When I read that essay, I found it pretty convincing. I didn’t see the problems, the mistakes—because I’d never been a moderator myself, and I’d never run a website.
That has changed now. And now that I have had my own experience of running an online forum (for five years now)—having decisions to make about moderation, having to deal with spam and egregious trolls and subtle trolls and just bad posters and crazy people and all sorts of things—now that I’ve actually had to face, and solve, the problems that Eliezer describes…
… now I can see how dreadfully, terribly wrong that essay is.
(Maybe the problem is that “Well-Kept Gardens Die By Pacifism” was written before Eliezer really started thinking about incentives? Maybe his own frustration with low-quality commenters on Overcoming Bias led him to drastically over-correct when trying to establish principles and norms for the then-new Less Wrong? Maybe he forgot to apply his own advice to his thinking about forum moderation? I don’t know, and a longer and deeper exploration of these musings will have to wait until I write the full version of this post.)
Still, I don’t want to give the impression that the essay is completely wrong. Eliezer writes:
I have seen rationalist communities die because they trusted their moderators too little.
But that was not a karma system, actually.
Here—you must trust yourselves.
A certain quote seems appropriate here: “Don’t believe in yourself! Believe that I believe in you!”
Because I really do honestly think that if you want to downvote a comment that seems low-quality… and yet you hesitate, wondering if maybe you’re downvoting just because you disagree with the conclusion or dislike the author… feeling nervous that someone watching you might accuse you of groupthink or echo-chamber-ism or (gasp!) censorship… then nine times out of ten, I bet, nine times out of ten at least, it is a comment that really is low-quality.
Downvoting. He’s talking about downvoting. Not banning! That was a mistake this essay could have included, but didn’t. (Perhaps because Eliezer hadn’t thought of it yet? But I do generally default to thinking well of people whose writings I esteem highly, so that is not my first hypothesis.)
And while the karma system has its own problems (of which I have spoken, a few times), nevertheless it’s a heck of a lot better than letting authors ban whoever they want from their posts.
The fact that it’s nevertheless (apparently) not enough—that the combination of downvotes for the bad-but-not-overtly-bannable, and bans for the overtly-bannable, is not enough for some authors—this is not some immutable fact of life. It simply speaks poorly of those authors.
Anyhow:
Well-kept gardens do not tend to die by accepting obviously norm-violating content.
Of course they do. That’s exactly how they tend to die. It’s precisely the obviously norm-violating content that is the problem, because if you accept that, then your members learn that your moderators either have an egregious inability to tell the good stuff from the bad, or that your moderators simply don’t care. That is deadly. That is when people simply stop trying to be good themselves—and your garden dies.
And there’s also another way in which well-kept gardens tend to die: when the moderators work to prevent the members from maintaining the garden; when “grass-roots” maintenance efforts—done, most commonly, simply with words—are punished, while the offenders are not punished. That is when those members who contribute the most to the garden’s sanctity—those who put in effort to rebut bad arguments, for instance, or otherwise to enforce norms and practices of good discussion and good thinking—will become disgusted with what they perceive as the moderators betraying the garden to its enemies.
Yes, shockingly, people have preferences about how people interact with them that go beyond obvious unambigious norm violations, what a shocker!
This seems to be just another way to describe what I wrote in the grandparent, except that your description has the connotation of something fine and reasonable and unproblematic, whereas mine obviously does not.
This seems to me to be the crux of the issue.
There’s a thing that happens in sports and related disciplines wherein the club separates into two different sections, where there is a competition team and there’s everybody else trying to do the sport and have a good time. There are very sharp differences in mindset between the teams.
In the competition team every little weakness or mistake is brutally hammered out of you, and the people on the team like this. It’s making them stronger and better, they signed up for it. But if a beginner tried to join them, the beginner would just get crushed. They wouldn’t get better, and they would probably leave and say their competitive-minded teammates are being jerks.
Without any beginners though, there is no competition team. The competitors all used to be beginners, and would have gotten crushed in the hyperbaric training chamber of their current team culture.
I think you are trying to push for a competition team, and Habryka is not.
Competition teams are cool! I really like them in their time and place. I think the AI Alignment forum is a little bit like this with their invite-only setup (which is a notable feature of many competition teams).
You need the beginner space though. A place where little babbling half-formed sprouting ideas can grow without being immediately stomped down for being insufficiently rigorous.
Another angle on the same phenomenon: If you notice someone has a faulty foundation in their house of understanding they are building, there are two fundamentally different approaches one could take. You could either:
Be a Fellow Builder, where you point out the mistake in a friendly way (trying not to offend, because you want more houses of understanding built)
Be a Rival Builder, where you crush the house, thereby demonstrating the faulty foundation decisively. (where you only want the best possible houses to even be built at all, so whether that other builder comes back is irrelevant)
I think Habryka is building LessWrong for Fellows, not Rivals.
wants to work collaboratively with others to figure out what’s true
My impression is that you want LessWrong to be a place of competitive truth-seeking, and Habryka is guiding LessWrong towards collaborative truth-seeking.
I think it’s fine to want a space with competitive dynamics. That’s just not what LessWrong is trying to be.
(I do appreciate the attempt at trying to bridge the epistemic gap, but just to be clear, this does not capture the relevant dimensions in my mind. The culture I want on LessWrong is highly competitive in many ways.
I care a lot about having standards and striving in intense ways for the site. I just don’t think the way Said does it really produces that, and instead think it mostly produces lots of people getting angry at each other while exacerbating tribal dynamics.
The situation seems more similar to having a competitive team where anyone gets screamed at for basically any motion, with a coach who doesn’t themselves perform the sport, but just complains in long tirades any time anyone does anything, making references to long-outdated methods of practice and training, with a constant air of superiority. This is indeed a common failure mode for competitive sports teams, but the right response to that is not to not have standards, it’s to have good standards and, most importantly, to have some functional way of updating the standards.)
So you want a culture of competing with each other while pushing each other up, instead of competing with each other while pushing each other down. Is that a fair (high-level, abstract) summary?
I think there is something in the space, but I wouldn’t speak in absolutes this way. I think many bad things deserve to be pushed down. I just don’t think Said has a great track record of pushing down the right things, and the resulting discussions seem to me to reliably produce misunderstandings and confusions.
I think a major thing that I do not like is “sneering”. Going into the cultural context of sneering and why it happens and how it propagates itself is a bit much for this comment thread, but a lot of what I experience from Said is that kind of sneering culture, which interfaces with having standards, but not in a super clear directional way.
I think you are trying to push for a competition team, and Habryka is not.
No. This idea was already discussed in the past, and quite definitively rejected. (I don’t have the links to the previous discussions handy, though I’ll try to dig them up when I have some time. But I am definitely not doing anything of the sort.)
What you describe is a reasonable guess at the shape of the disagreement, but I’m afraid that it’s totally wrong.
EDIT: Frankly, I think that the “mystery” has already been solved. All subsequent comments in this vein are, in essence, a smokescreen.
I see the disagreement react, so now I’m thinking maybe LessWrong is trying to be a place where both competitive and collaborative dynamics can coexist, and giving authors the ability to ban users from commenting is part of what makes the collaborators space possible?
“‘Well-Kept Gardens Die By Pacifism’ Considered Harmful”
Commenting to register my interest: I would like to read this essay. As it stands, “Well-Kept Gardens” seems widely accepted. I can say I have internalized it. It may not have been challenged at any length since the original comment thread. (Please correct me with examples.)
My guess is something like more than half of the authors to this site who have posted more than 10 posts that you commented on, about you, in particular. Eliezer, Scott Alexander, Jacob Falkovich, Elizabeth Van Nostrand, me, dozens of others. This is not a rare position. I would have to dig to give you an exact list, but the list is not short, and it includes large fractions of almost everyone who one might consider strong contributors to the site.
We have had this conversation many times. I have listed examples of people like this in the past. If you find yourself still incapable of modeling more than 50% of top authors on the site whose very moderation guidelines you are opining on, after many many many dozens of hours of conversation on the topic, maybe you should just stay out of these conversations, as you are clearly incapable of modeling the preferences of the majority of people who would be affected by your suggested changes to the moderation guidelines.
A good start, if you actually wanted to understand any of this at all, would be to stop strawmaning these people repeatedly by inserting random ellipses and question marks and random snide remarks implying the absurdity of their position. Yes, people have preferences about how people interact with them that go beyond obvious unambigious norm violations, what a shocker! Yes, it is of course completely possible to be hostile in a plausible deniable way. Indeed, the most foundational essay for the moderation guidelines on this site, mentions this directly (emphasis mine):
Well-kept gardens do not tend to die by accepting obviously norm-violating content. They usually die by people being bad discourse participants in plausible deniable ways, just kind of worse, but not obviously and unambiguously worse, than what has come before. This is moderation 101. Yes, of course authors, and everyone else, will leave, if you fill a space with people just kind of being bad discourse participants, even if they don’t do anything egregious. How could reality work any other way.
You are making false claims. Two of these claims about the views of specific individuals are clearly contradicted by those individuals’ own statements, as I exhibit below.
I reached out to Scott Alexander via Discord on 11 July 2025 to ask if he had “any specific feelings about Said Achmiz and whether he should be allowed to post on Less Wrong”. Alexander issued this statement:
Separately, as I mentioned to you in our meeting of 26 June 2025, in a public comment of 9 October 2018, Jacob Falkovich wrote (bolding added):
Thanks for the follow-up! I talked with Scott about LW moderation a long time ago (my guess is around 2019) and Said’s name came up then. My guess is he doesn’t remember. It wasn’t an incredibly intense mention, but we were talking about what makes LW comment sections good or bad, and he was a commenter we discussed in that conversation in 2019 or so.
I think you can clearly see how the Jacob Falkovich one is complicated. He basically says “I used to be frustrated by you, but this thing made that a lot better”. I don’t remember the exact time I talked to Jacob about it, but it came up at some point in some context where we discussed LW comment sections. It’s plausible to me it was before he made this comment, though that would be a bit surprising to me, since that’s pretty early in LW’s history.
Roll to disbelieve.
I share something like Achmiz’s incredulity, but for me, I wouldn’t call it an inability to model preferences so much as disapproval of how uninterested people are in arguing that their preferences are legitimate and should be respected by adults who care about advancing the art of human rationality.
Achmiz has argued quite eloquently at length for why his commenting style is conducive to intellectual progress. If someone disagrees with that case on the intellectual merits, that would be interesting. But most of the opposition I see seems to appeal not to the intellectual merits, but to feelings: that Achmiz’s comments make authors feel bad (in some way that can’t be attributed to a breach of etiquette rules that could be neutrally enforced), which makes them not want to use the website, and we want people to use the website.
I’m appalled that the mod team apparently takes this seriously. I mean, okay, I grant that you want people to use the website. If almost everyone who might use your website is actually that depraved (which sounds outlandish to me, but you’re the one who’s done dozens of user interviews and would know), I guess you need to accommodate their mental illness somehow for pragmatic reasons. But normatively (dealing with the intellectual merits and not feelings), you see how the problem is with everyone else, not Achmiz, right?
Disagree!
About what? Well specifically the last paragraph. But also I think we fundamentally disagree on what the gradient toward better rationality looks like. As in, what kind of norms should be promoted and selected for.
My view is something like: it’s very important to have a good model for how emotions (like annoyance, appreciation, liking, disliking, that kind of thing) work, and one should take on significant efforts to optimize communication for that, both in personal writing style and with respect to how a community is moderated.
I think your view is probably something like: this is an atrocious idea, WTAF, we should instead try to get away from focusing on feelings since they are the noise rather than the signal, and should judge everything on intellectual merit insofar as that is possible. (Plus a whole bunch of nuance: we probably don’t want to do things that intentionally make other people angry, maybe a bit of hedging is appropriate, maybe taking our own emotions into account to the extent that we can correct for them is good, etc. Idk how exactly you feel about, e.g., Leaving a Line of Retreat.)
Assuming this is roughly correct, I’d first want to pose a hypothetical. Suppose it were in fact the case that my vision of rationality works better, in the sense that communities which are built around the kind of culture I’m envisioning lead to better outcomes. (Not better outcomes as in “people are more coddled and feel better” but in terms of epistemic performance, however that would be measured.) Would this actually be a crux?
I’m starting with this because I’m noticing that your last paragraph-
-does not actually ask this question, but just takes it for granted that not coddling is better. So if coddling were in fact better, would this actually make a difference, or would you still just reject the approach?
Assuming it is a crux, the second thing I’d ask is, why are you confident in this? Wouldn’t your experience suggest that most people on this site aren’t particularly good at this rationality thing? Why did Eliezer get the trans thing wrong? Why didn’t everyone agree with you immediately when you tried to fix it? (I hope this doesn’t open a huge rabbit hole.)
My estimate would be something like, there are around 2-4 people on this site who are actually capable of the kind of rationality that you’re envisioning, and since one of them is you, there’s like 1-3 others. The median LW person—and not just the median but up to the 80th percentile, at least—is strongly influenced by style/vibes/kindness/fluff/coddling in what they engage with, how long they continue engaging with it, and in how much they update their beliefs. This view seems to me to be very compatible with the drama around Said, with everything that happened to you, with which posts are well-received, and really with everything I can observe on this site. (I don’t think it’s incompatible with the community’s achievements or the amount of mind-changing that does take place.)
And even if there are more people who are capable of not letting vibes affect their judgment/beliefs/etc (though I’m not conceding that there are), it would still take significantly more effort, and effort is absolutely an important bottleneck. It is important (importantly bad) if something zaps people’s energy. Energy (in the sense of willpower/motivation/etc.) is the relevant currency for getting stuff done, for most people.
Since I think you think that my vision would be terrible if it were realized, one point I want to make is that being nice/considerate/coddling does not actually require you to lie, at all. I know this because I tend to try much harder than most to not make someone feel bad (I think), and I can do it without lying. I was kind of giggling when thinking about how to do that in this comment because in some sense, trying to be nice to you is insulting (because it implies that I don’t respect your ability to be unaffected by vibes). But I decided to do it anyway just because then I can use it as an illustration of the kinds of things my model entails. So, here’s an incomplete list of things I’ve done in this comment to make it feel nicer.
Listing my view first rather than yours first (bc the natural flow was to list one position and then open the second one with how much it disagrees with the first position—so the default version would have been “here’s what I think you believe, but I think that would actually be very bad because xyz”, but by flipping it around I get to trash talk my view rather than yours)
Using derogatory language for my position (“coddling”)
Including a compliment
Lots of other details about how I write to make it sound less arrogant, which have become even more automatic at this point than the stuff above and it’d actually take significant effort to not do them. (Using ! rather than . for some sentences is an example, it tends to be status-lowering.)
This is all I think pretty typical stuff that I do all the time when I communicate with people via text on important things (usually without calling attention to it). I used to not do any of it, and in my experience, my current style of communicating works vastly better. (It works tremendously better outside LW, but I still think it works significantly better on LW as well.) And it didn’t require me to lie, or even water down my argument. A nice feature of how most humans work is that their emotions are actually determined more by platitudes and status comparisons than by your actual position, which means you can usually even tell them that you think they’re completely wrong without making them feel bad, if you package it right. In fact, I believe that the kind of norms you’re envisioning would be a disaster if they were enforced by the mod team, but given how I’ve written the rest of this comment, I think I could get away with saying this even if I were talking to the median LW user, without making them feel animosity toward me.
(I realize that I’ve just been talking about 1-1 interactions but this is a public forum, will get to that now.)
So, with a model like the one I’ve sketched out, the idea that we should step in if a user makes other users uncomfortable seems completely reasonable at first glance. (Like, most people aren’t in fact that good at rationality, it’ll make them annoyed, less rational, zap their energy, seems like a clear net negative.) Now Said said here that the value of his comments isn’t about what the author feels like, it’s about the impact on the whole forum. Very good point, but...
… these things aren’t actually separate. It’s not like vibes exist independently between any two people in a discussion. They are mostly independent for each top-level comment thread. But if A makes a post, and B leaves a top-level comment that’s snarky and will hurt the feelings of A, then I’m not gonna go in there and talk to B as if A didn’t exist. I know (or at least am always assuming) that A is present in the conversation whether they reply or not, because it’s their post (and I know I care a ridiculous amount about comments on (most of) my posts). This completely colors the subsequent conversation.
As for Said specifically, I have no memories of being upset about his comments on my posts (it’s possible it happened and I forgot), but I have many (non-specific) memories of seeing his comments on different posts and being like, “ohh no this is not going to be helpful :(” even though iirc I agree with him more often than not. My brutally honest estimate of the total impact of these comments is that it lands below neutral. I’m not super confident in this—but I’m very confident that the net impact would be a lot more positive if he articulated identical points in a different style. The claim that a lot of people had issues with him strikes me as plausible. As I said, I think there’s just not much of a tradeoff here. I mean, there’s a tradeoff for the commenter since it takes effort to be nice. But there’s not much of a tradeoff for the product (the comment). Maybe it’ll be longer, but, I mean.
Counterpoint: I’m much more vibe-sensitive than the median LW user, so even if most people’s rationality will be damaged by having an unfriendly comment directed at them, maybe most of them won’t care if they just see B being unfriendly to A. My response: definitely directionally true; this is why I’m not confident that Said’s comments are a net negative. Maybe they’re a net positive because of the effect on other people.
Another counterpoint: maybe B being rude to A colors the vibe initially, but not if it spawns a huge comment thread between D and E about something only vaguely related to the original post; at that point it doesn’t matter whether B was nice to A (but B made it happen with their initial response). My response: also true, still not enough to overturn my conclusion.
(More just explaining my model.)
I don’t think there is altogether much evidence that the instrumental rationality part of the sequences is effective. (Like How To Actually Change Your Mind.) I completely grant that LW is vastly better than the rest of the internet at people changing their mind, but that can be equally explained by people who are already much better at changing their mind being drawn into one community.
One reason is that LW still sucks at this, even if the rest of the internet sucks way more. But the more important reason is that if you observe how mind change happens when it does happen, well, it rarely looks like someone applying a rationality technique from the sequences—and when it does look like that, it’s probably either a topic that the person wasn’t that invested in in the first place, or the person is you.
I think the overarching problem here is that Eliezer didn’t have a good model of how the brain works, and LW still doesn’t have one today, and because of that, rationality techniques as taught in the sequences are just not going to be very effective; you’re not going to be good at manipulating a system if all your models for how the system works are terrible. (Ironically, beliefs about how the brain works are a prime example of the category of belief that is now very sticky and almost impossible to change with those tools.) There was a tweet, I don’t have a link anymore, where someone said that the main thing people got out of the sequences was just this vibe that a lot more was possible. I think this is true, and the discussion on LW about it that I remember seemed to take it seriously, but like, what a gigantic indictment of the entire project! His understanding of the brain sucked so bad that his entire collection of plans operating in his framework was less effective than a single out-of-model effect that he didn’t understand or optimize for! If this is even slightly true, it clearly means that we should care a hell of a lot more about vibes than we currently do, not less! (Though, obligatory disclaimer that even if the sequences functioned 100% only as a community-building tool, which is a more extreme claim than what I think is true, they would probably still have been worth it.)
In case it wasn’t clear, I think all the caring about vibes is entirely justified as an instrumental reason. I do think it’s also terminally good if people feel better, but I think everything I said holds if we assign that 0 weight.
I agree that it’s important to optimize our vibes. They aren’t just noise to be ignored. However, I don’t think they exist on a simple spectrum from nice/considerate/coddling to mean/callous/stringent. Different vibes are appropriate to different contexts. They don’t only affect people’s energy but also signal what we value. Ideally, they would zap energy from people who oppose our values while providing more energy to those who share our values.
Case in point, I was annoyed by how long and rambly your comment was and how it required a lot of extra effort to distill a clear thesis from it. I’m glad you actually did have a clear thesis, but writing like that probably differentially energizes people who don’t care.
Thanks for this interesting comment!—and for your patience. I really appreciate it.
I absolutely agree with that statement; the problem is that I think not-lying turns out to be a surprisingly low standard in practice. Politicians and used car salesmen are very skilled at achieving their desired changes in people’s beliefs and behavior without lying, by listing a bunch of true positive-vibe facts about the car and directing attention away from the algorithm they’re using to decide what not to say—or what evidence not to look for, prior to even saying anything.
The most valuable part of the Sequences was the articulation of a higher standard than merely not-lying—not just that the words you say are true, but that they’re the output of a search process that would have returned a different answer if reality were different. That’s why a key thing I aspire to do with my writing is to reveal (a cleaned-up refinement of) my thought process, not just the conclusion I ended up at. On the occasions when I’m trying to sell my readers a car, I want them to know that, so that they know that they need to read other authors to learn about reasons to not buy the car (which I haven’t bothered to come up with). The question to be asking is not, “Is this lying?—if not, it’s permissible”, but, “Is this maximally clear?—if not, maybe I can do better.”
All this to say that I’m averse to overtly optimizing the vibes to be more persuasive, because I don’t want to persuade people by means of the vibes. That doesn’t count! The goal is to articulate reasoning that gets the right answer for the right reasons, not to compute actions to cause people to agree with what I currently think is the right answer.
But you know all that already. I think you’re trying to advocate not so much for making the vibes persuasive, but for making sure the vibes aren’t themselves anti-persuasive in a way that prevents people from looking at the reasoning. I think I’m in favor of this! That’s why I’m so obsessed with telling abstract parables with “timeless” vibes—talk about bleggs and rubes, talk about the Blue and Green teams, talk about Python programs that accept each other’s outputs as inputs—talk about anything but real-world object-level disputes that motivate seeking recourse in philosophy, which would be distracting. (I should mention that this technique has the potential failure mode of obfuscating object-level details that are genuinely relevant, but I’m much less worried about that mattering in practice than some of my critics.)
But that kind of “avoid unnecessarily anti-persuasive vibes” just doesn’t seem to be what’s at issue in these repeated moderation blow-ups?
Commenters pointed out errors in my most recent post. They weren’t overtly insulting; they just said that my claim was wrong because this-and-such. I tried to fix it, but still didn’t get it right. (Embarrassing!) I didn’t take it personally. (The commenters are right and my post as written is wrong.) I think there’s something pathological about a standard that would have blamed the commenters for not being nice enough if I had taken it personally, because if I were the type to take it personally, them being nicer wouldn’t have helped.
Crucially, I don’t think this is a result of me having genetically rare superhuman rationality powers. I think my behavior was pretty normal for the subject matter: you see, it happened to be a post about mathematics, and the culture of mathematics is good at training people to not take it personally when someone says “Your example doesn’t work because this-and-such.” If I’m unusually skilled at this among users of this website, I think that speaks more to this website being a garbage dump than to me being great. (I think I want to write a top-level post about this aspect of math culture.)
Sneaky! (I’m embarrassed that I didn’t pick up on this being a deliberate conciliatory tactic until you flagged it.)
Only under a pretty generous interpretation of knowing. I certainly didn’t have a good model for this standard of communication when I wrote my comment, which I agree is much higher than just not lying. (And I’ve been too lazy to read your posts on this in the past, even though I’ve seen them a few times.)
But, I think caring for vibes is compatible with this standard as well. The set of tools you have to change vibes is pretty large, and IME it’s almost always possible to adjust them not just without lying, but while still explaining the actual reasons why you believe the thing you’re arguing for.
I do think that’s the issue.
So, this is the comment that was causally upstream of all the recent discussion under this post here.
The vibes of this comment are, imo, very bad, and I think that’s the reason why Gordon complained about it. Four people voted it as too combative (one of them being Gordon himself).
habryka said that Said triggers a sizeable part of all complaints on this site, so I guess there’s not really a way to talk about this in non-specific terms, so I’ll just say that I think this is a very central example, and most other cases of where people complain about Said are like this as well.
Could one have written a comment that achieves the same things but has better vibes? In my opinion, absofuckinglutely! I could easily write such a comment! (If that’s a crux, I’m happy to do it.) I have many disagreements with Said (as demonstrated in the other comment thread), but maybe the biggest one is over the claim that changing presentation is changing content. Sure, that’s literally true in a very narrow sense, but I think practically it’s just completely wrong. (I mean, now I’m just repeating my claim from the second paragraph.)
(I agree that the religion post had issues, and imo Said pointed out one of them. Conversely, I saw the post, figured I’d disagree with it, and deliberately declined to read it and write a response, as I often do. Which is to say, I agree that there was some value to Said writing his comment, whether it’s a net positive or not.)
Right, but this example looks highly dissimilar to me. Gurkenglas was being very brief/minimalistic, which could be considered a little rude, but (a) the context is completely different (this was a low-stakes situation in terms of emotional investment, what he said doesn’t invalidate the post at all, and he was correcting an objective error—all of this different from Gordon’s post), and (b) Said’s comment still has actively worse vibes. (And Gurkenglas’ comment seems to be the only one that could even be considered rude; the other two people who commented were being actively nice.) So, I agree that any standard that would make these comments not okay would be extremely bad. I also agree that your reaction, while good, is not particularly special, in the sense that probably most people would have dealt with this just fine.
I don’t think you can. The reason why the comment in question has aggressive vibes is that it’s clearly stating things that Worley predictably won’t want to hear. The way you write something that includes the same denotative claims with softer vibes is by means of obfuscation: adding a lot of puffy hedging verbiage that makes it easier for a distracted or conflict-averse reader to skim over the comment’s literal words without noticing that a rebuke is intended. The obfuscated version only achieves the same things in the minds of sufficiently savvy readers who can reverse the vibe-softening distortion and infer the original intent.
Strong disagree. Said’s comment does several things that have almost no function except to make vibes worse, which means you can just take those out, which will make the comment shorter. I will in fact add in a little bit of hedging and it will still be shorter overall because the hedging will require fewer words than the unnecessary rudeness.
Here’s Said’s comment. Here’s a not-unnecessarily-rude-but-still-completely-candid-version-that’s-actually-166-characters-shorter-than-the-original-and-that-I-genuinely-think-achieves-the-same-thing-and-if-not-I’d-like-to-hear-why-not
This takes it down from about an 8/10 rudeness to maybe a 4 or 5. Is anyone going to tell me that this is not sufficiently blunt or direct? Will non-savvy readers have to read between the lines to figure out that this is a rebuttal of the core idea? I think the answer is clearly no; if people see this comment, they will immediately view it as a rebuttal of the post’s thesis.
The original uses phrases like
This is not any more direct than saying
These two messages convey exactly the same information, the first just has an additional layer of derision/mockery which the second doesn’t. (And again, the second is shorter.) And I know you know this difference because you navigate it in your own writing, which is why I’m somewhat irritated that you’re talking as if Said’s comments were just innocently minimalistic/direct.
edit: corrected typo
Thanks, that was better than most language-softening attempts I see, but …
Similar information, but not “exactly” the same information. Deleting the “very harmful false things” parenthetical omits the claim that the falsehoods promulgated by organized religion are very harmful. (That’s significant because someone focused on harm rather than epistemics might be okay with picking up harmless false beliefs, but not very harmful false beliefs.) Changing “very quickly you descend” to “you can descend” alters the speed and certainty with which religious converts are claimed to descend into nebulous and vague anti-epistemology. (That’s significant, because a potential convert being warned that they could descend into anti-epistemology might think, “Well, I’ll be extra careful not to do that, then,” whereas a warning that one very quickly will descend is less casually brushed off.)
That’s what I meant by “obfuscation” in the grandparent: the softer vibes of no-assertion-of-harmfulness versus “very harmful false things”, and of “can descend” versus “very quickly descend”, stem from the altered meanings, not just from adjusting the vibes while keeping the meanings constant.
It’s not that I don’t know the difference; it’s that I think the difference is semantically significant. If I more often use softer vibes in my comments than Said, I think that’s probably because I’m a less judgemental person than him, as an enduring personality trait. That is, we write differently because we think differently. I don’t think website moderators should require commenters to convincingly pretend to have different personalities than they actually have. That seems like it could be really bad.
Okay—I agree that the overall meaning of the comment is altered. If you have a categorical rule of “I want my meaning to be only this and exactly this, and anything that changes it is disqualified” then, yes, your objection is valid. So consider my updated position to be something like, “your standard (A) has no rational justification, and also (B) relies on a false model of how people write comments.” I’ll first argue (A), then (B).
It is logically coherent to have the reactions you describe in the parentheticals. But do you think it’s plausible? What would be your honest probability assessment that a religious person reads this and actually goes that route—as in, they accept the claims of the comment but take the outs you describe in the parentheticals—whereas if they had read Said’s original comment instead, they’d still accept the premises, and this time they’d be convinced?
Conversely, one could imagine that a religious person reads Said’s version and doesn’t engage with it because they feel offended, whereas the same person would have engaged with my version. (Which, obviously, I’d argue is more likely.)
At this point, my mental model of you responds with something like
To which I say, okay. Fine. I don’t think there is a slippery slope here, but I think arguing this is a losing battle. So I’ll stop with (A) here.
My case for (B) is that the algorithm which produced Said’s message didn’t take these details into account, so changing them doesn’t censor or distort the intent behind the message. Said didn’t run an assessment of exactly how harmful the consequences are, determine that they’re most accurately described as “very harmful” rather than “harmful” or “extremely harmful”, and then post it. Ditto with the other example.
I’m not sure how much evidence I need here to make this point, but here are some ways in which you can see that the above is true:
If you did consider the meaning to this level of detail, then you wouldn’t write “very quickly you descend”, because, well, you might not descend, it’s not 100%, so you’d have to qualify this somehow.[2]
Thinking this carefully about the content of your messages takes a lot of time. Said doesn’t take this much time for his comments, which is how he can respond so quickly.
If you thought about the actual merits of the proposal, then you’d scrap the entire second half of the comment, which is only tangentially relevant to the actual crux. You would be far more likely to point out that a good chunk of the post relies on this sentence
… which is not justified in the post at all. This would be a vastly more useful critique!
So, you’re placing this extreme importance on the precise semantic meaning of Said’s comment, when the comment wasn’t that well thought-out in the first place. I’d be much more sympathetic to defending details of semantic meaning if those details had been carefully selected.
The thing that’s frustrating to me—not just this particular point in this conversation but the entire vibes debate—and which I should have probably pointed out much earlier—is that being more aware of vibes makes your messages less dependent on them, not more. Because noticing the influence allows you to adjust. If you realize a vibe is pushing you to write X, you can then be like, hold on that’s stupid, let me instead re-assess how whatever I’m responding to right now actually impacts the reasons why I believe the thing I believe. And then you’ll probably notice that what you’re pushed to write doesn’t really hit the crux at all and instead scrap it and write something else. (See the footnote[3] for examples in this category.)
To put it extremely bluntly, the thing that was actually causally upstream of the details in Said’s message was not a careful consideration of the factual details; it was that he thinks religion is dumb and bad, which influenced a parameter sent to the language-generation module that output the message, which made it choose language that sounded more harsh. This is why it says “perfect example” and not “example”, why the third paragraph sounds so dismissive, why the message contains no !s, why he said “very quickly you descend” rather than “you can descend”, and so on. The vibe isn’t an accidental by-product, it’s the optimization target! Which you can clearly observe by the changes I’ve pointed out here.
… and on a very high level, to just give a sense of my actual views on this, the whole thing just seems ridiculously backwards in the sense that it doesn’t engage with what our brains are actually doing. Like, I think it happens to be the case that not listening to vibes is often better (although this is a murky distinction because a lot of good thought relies on what are essentially vibes as well—it’s ultimately a form of computation), but the broader point is that, whatever you want to improve, more awareness of what’s actually going on is going to be good. Knowledge is power and all that.
If you don’t think this, then that would be a crux, but also I’d be very surprised and not sure how I’d continue the conversation then, but for now I’m not thinking too much about this.
This is absurdly nit-picky, but so are the changes you pointed out.
Alright, for example, the first thing I wrote when responding to your comment was about you quoting me saying “These two messages convey exactly the same information”. I actually meant to refer only to the specific line I quoted, where this statement was more defensible. But I asked myself, “does this actually matter for the crux?”, and the answer was no, so I scrapped it. The same thing is true for me quoting Gordon’s response and pointing out that it fits better with my model than yours, and for a snide remark about how your parentheticals ascribe superhuman rationality powers to religious people in particular.
Now you may be like, well those are good things, but that’s different from vibes. But it’s not really, it’s the same skill of, notice what your brain is actually doing, and if it’s dumb, interfere and make it do something else. More introspection is good.
I guess the other difference is that I’m changing how I react here rather than how someone else reacts. I guess some people may view one as super good and the other as super bad (e.g., gwern’s comment gave off that vibe to me). To me these are both good for the same reason. Deliberately inserting unhelpful vibes into your comment is like uploading a post with formatting that you know will break the editor and then being like “well, the editor only breaks because this part here is poorly programmed; if it were programmed better then it would do fine”. In any other context this would pattern-match to obviously foolish behavior. (“I don’t look before crossing the street because cars should stop.”) It’s only taken seriously because people are deluded about the degree to which vibes matter in practice.
Anyway, I think you get the point. In retrospect I should have probably structured a lot of my writing about this differently, but can’t do that now.
Sorry, phrasing it in terms of “someone focused on harm”/”a potential convert being warned” might have been bad writing on my part, because what matters is the logical structure of the claim, not whether some particular target audience will be persuaded.
Suppose I were to say, “Drug addiction is bad because it destroys the addict’s physical health and ability to function in Society.” I like that sentence and think it is true. But the reason it’s a good sentence isn’t because I’m a consequentialist agent whose only goal is to minimize drug addiction, and I’ve computed that that’s the optimal sentence to persuade people to not take drugs. I’m not, and it isn’t. (An addict isn’t going to magically summon the will to quit as a result of reading that sentence, and someone considering taking drugs has already heard it and might feel offended.) Rather, it’s a good sentence because it clearly explains why I think drug addiction is bad, and it would be dishonest to try to persuade some particular target audience with a line of reasoning other than the one that persuades me.
I don’t think those are good metaphors, because the function of a markup language or traffic laws is very different from the function of blog comments. We want documents to conform to the spec of the markup language so that our browsers know how to render them. We want cars and pedestrians to follow the traffic law in order to avoid dangerous accidents. In these cases, coordination is paramount: we want everyone to follow the same right-of-way convention, rather than just going into the road whenever they individually feel like it.
In contrast, if everyone writes the blog comment they individually feel like writing, that seems good, because then everyone gets to read what everyone else individually felt like writing, rather than having to read something else, which would probably be less informative. We don’t need to coordinate the vibes. (We probably do want to coordinate the language; it would be confusing if you wrote your comments in English, but I wrote all my replies in French.)
Right, exactly. He thinks religion is dumb and bad, and he wrote a comment that expresses what he thinks, which ends up having harsh vibes. If the comment were edited to make the vibes less harsh, then it would be less clear exactly how dumb and bad the author thinks religion is. But it would be bad to make comments less clearly express the author’s thoughts, because the function of a comment is to express the author’s thoughts.
Absolutely. For example, if everyone around me is obfuscating their actual thoughts because they’re trying to coordinate vibes, that distortion is definitely something I want to be tracking.
The feeling is mutual?!
Oh. Oh. So you agree with me that the details weren’t that well thought out (or at least you didn’t bother arguing against that), and ditto about the net effects, but you don’t think it matters (or at any rate, that it isn’t the important point) because you’re not trying to optimize for positive effects, but just for honest communication...?
This is not what I thought your position was, but I guess it makes sense if I try to retroactively fit it. This means most (all?) of my objections don’t apply anymore. Like, yeah, if you terminally value authentically representing the author’s emotional state of mind, then of course deliberately adjusting vibes is a net negative for your values.
(I think this completely misses the point I was trying to make, which is that “I will do X which I know will have bad effects, but I’ll do it anyway because the reason it has bad effects is that other people are making mistakes, so it’s not me who should change X, but other people who should change” is recognized as dumb for almost all values of X, especially on LW—but I also think this doesn’t matter anymore, either, because the argument is again about consequences, which you just demoted as the optimization target. If you agree that it doesn’t matter anymore, then no need to discuss this more.)
I guess now I have a few questions
Why do you have this position? (i.e., that comments aren’t about impact). Is this supposed to be, like, the super obvious message that was clearly the main point of the sequences, or something like that?
Is your default model of LWians that most of them have this position?
You said earlier that the repeated moderation blow-ups aren’t about bad vibes. I feel like what you’ve said since justifies why you think Said’s comments are good, but not that they aren’t about vibes—like even with everything you said here, it still seems like the causal stream here is clearly bad vibes → people complain to habryka → Said gets in trouble? (This isn’t super important, but still felt worth asking.)
Because naïvely optimizing for impact requires concealing or distorting information that people could have used to make better (more impactful) decisions in ways that can’t realistically be anticipated by writers naïvely optimizing for impact.
Here’s an example from Ben Hoffman’s “The Humility Argument for Honesty”. Suppose my neck hurts (coincidentally, after trying a new workout routine), and after some internet research, I decide I have neck cancer. The impact-oriented approach would call for me to do my best to convince my doctor I have neck cancer, to make sure that I get the chemotherapy I’m sure I need. The honesty-oriented approach would call for me to explain to my doctor the evidence and reasoning for why I think I have neck cancer.
Maybe there’s something to be said for the impact-oriented approach if my self-diagnoses are never wrong. But if there’s a chance I could be wrong, the honesty-oriented approach is much more robust. If I don’t really have neck cancer and describe my actual symptoms, the doctor has a chance to help me discover my mistake.
No. But that’s OK with me, because I don’t regard “other people who use one of the same websites as me” as a generic authority figure.
Yes, that sounds right. As you’ve gathered, I want to delete the second arrow rather than altering the value of the “vibes” node.
Was definitely not going to make an argument from authority, just trying to understand your world view.
IIRC we’ve touched on four (increasingly strong) standards for truth:
Don’t lie
(I won’t be the best at phrasing this) something like “don’t try to make someone believe things for reasons that have nothing to do with why you believe it”
Use only the arguments that convinced you (the one you mentioned here)
Make sure the comment accurately reflects your emotional state[1] about the situation.
For me, I endorse #1, and about 80% endorse #2 (you said in an earlier comment that #1 is too weak, and I agree). #3 seems pretty bad to me because the most convincing arguments to me don’t have to be the most convincing arguments to others (and indeed, they’re often not), and the argument that persuaded me initially especially doesn’t need to be good. And #4 seems extremely counter-productive, both because it’ll routinely make people angry and because so much of one’s state of mind at any point is determined by irrelevant variables. It seems only slightly less crazy than—and in fact very similar to—the radical honesty stuff. (Only the most radical interpretation of #4 is like that, but as I said in the footnote, the most radical interpretation is what you used when you applied it to Said’s commenting style, so that’s the one I’m using here.)
This is not a useful example, though, because it doesn’t differentiate between any two points on this 1-4 scale. You don’t even need to agree with #1 to realize that trying to convince the doctor is a bad idea; all you need to do is realize that they’re more competent than you at understanding symptoms. A non-naive, purely impact-based approach just describes symptoms honestly in this situation.
My sense is that examples that prefer something stronger than #2 will be hard to come up with. (Notably your argument for why a higher standard is better was itself consequentialist.)
Idk, I mean we’ve drifted pretty far off the original topic and we don’t have to talk any more about this if you’re not interested (and also you’ve already been patient in describing your model). I’m just getting this feeling—vibe!—of “hmm no this doesn’t seem quite right, I don’t think Zack genuinely believed #1-#4 all this time and everything was upstream of that, this position is too extreme and doesn’t really align with the earliest comment about the moderation debate, I think there’s still some misunderstanding here somewhere”, so my instinct is to dig a little deeper to really get your position. Although I could be wrong, too. In any case, like I said, feel free to end the conversation here.
Re-reading this comment again, you said ‘thought’, which maybe I should have criticized because it’s not a thought. How annoyed you are by something isn’t an intellectual position, it’s a feeling. It’s influenced by beliefs about the thing, but also by unrelated things like how you’re feeling about the person you’re talking to (RE what I’ve demonstrated with Said).
Right. Sorry, I think I uncharitably interpreted “Do you think others agree?” as an implied “Who are you to disagree with others?”, but you’ve earned more charity than that. (Or if it’s odd to speak of “earning” charity, say that I unjustly misinterpreted it.)
Right. I tried to cover this earlier when I said “(a cleaned-up refinement of) my thought process” (emphasis added). When I wrote about eschewing “line[s] of reasoning other than the one that persuades me”, it’s persuades in the present tense because what matters is the justificatory structure of the belief, not the humdrum causal history.
There’s probably a crux somewhere near here. Your formulation of #4 seems bad because, indeed, my emotions shouldn’t be directly relevant to an intellectual discussion of some topic. But I don’t think that gives you license to say, “Ah, if emotions aren’t relevant, therefore no harm is done by rewriting your comments to be nicer,” because, as I’ve said, I think the nicewashing does end up distorting the content. The feelings are downstream of the beliefs and can’t be changed arbitrarily.
I want to note that I dispute that you demonstrated this.
FWIW, I absolutely do not think that the “softened” version would be more likely to be persuasive. (I think that the “softened” version is much worse, even more so than Zack does.)
Wrong:
I’ve refrained from asking this question until now, but at this point, I really have to:
What, exactly, do you mean when you say “vibes”?
There’s maybe a stronger definition of “vibes” than Rafael’s “how it makes the reader feel”: something like “the mental model of the kind of person who would post a comment with this content, in this context, worded like this”. A reader might be violently allergic to eggplants and would then feel nauseous when reading a comment about cooking with eggplants, but it feels obvious that it wouldn’t then make sense to say the eggplant-cooking comment had “bad vibes”.
Meanwhile, if a poster keeps trying to use esoteric Marxist analysis to show how dolphin telepathy explains UFO phenomena, you might start subconsciously putting the clues together and thinking “isn’t this exactly what a crypto-Posadist would be saying”. Now we’ve got vibes. Generally, you build a model, consciously or unconsciously, of what the person is like and why they’re writing the things they do, and then “vibes” are the valence of what the model-person feels like to you. “Bad vibes” can then be things like “my model of this person has hidden intentions I don’t like”, “my model of this person has a style of engagement I find consistently unpleasant”, or “my model is that this person is mentally unstable and possibly dangerous to be around”.
This is still somewhat subjective, but feels less so than “how the comment makes the reader feel”. Building the model of the person based on the text is inexact, but it isn’t arbitrary. There generally needs to be something in the text or the overall situation to support model-building, and there’s a sense that the models are tracking some kind of reality, even though inferences can go wrong and different people can pay attention to very different things. There’s still another complication that different people also disagree on goals or styles of engagement, so they might build the same model and disagree on the “vibes” of it. This still isn’t completely arbitrary; most people tend to agree that the “mentally unstable and possibly dangerous to be around” model has bad vibes.
Basically the sum of what a post or comment will make the reader feel. (This is not the actual definition because the actual definition would require me to explain what I think a vibe is at the level of the brain, but it’s good enough.)
Technically this is a two-place function of post and reader because two different people can feel very different things from reading the same thing, so strictly speaking it doesn’t make sense to say that a comment has bad vibes. But in practice it’s highly correlated. So when I say this comment has bad vibes, it’s short for, “it will have bad vibes for most readers”, which I guess is in turn short for, “most people who read this will feel things that are detrimental for having a good discussion”.
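To make that shorthand explicit, here is a minimal sketch (my own formalization, assuming for illustration that vibes can be scored numerically):

$$\text{vibes} : \text{Comment} \times \text{Reader} \to \mathbb{R}, \qquad \text{vibes}(c) := \mathbb{E}_{r \sim \text{Readers}}\big[\text{vibes}(c, r)\big]$$

The one-place usage is just this expectation over readers, and it is only a safe shorthand because the per-reader values are highly correlated, i.e., the variance across readers is small.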
To give the most obvious example in the specific comment, the sentence
sounds very combative (i.e., will generally evoke adversarial feelings). And tbc this will also be true for people who aren’t the author, because we’ve evolved to simulate how others feel; that’s why you can feel awkward watching an awkward scene in a movie.
BTW I think asking me what I mean by vibes is completely reasonable. Someone strong-downvoted your comment, I guess because it sounds pedantic, but I don’t agree with this; I don’t think this is a case where the concept is so obvious that you shouldn’t ask for a definition. (I strong-upvoted back to 0.)
I see, thanks.
Well, I think that the concept of “vibes” (of a comment), as you are using the term to mean, is fundamentally a broken one, because it abstracts away from highly relevant causal factors.
Here’s why I say that. You say:
And there are two problems with this.
First, you correctly acknowledge that different readers can have different reactions, but your dismissal of this objection with the claim that “it’s highly correlated” is a mistake, for the simple reason that the variation in reactions is not randomly distributed across readers along relevant dimensions. On the contrary, it’s highly correlated with a variety of qualities which we have excellent reason to care about (and which we might collectively summarize as “likelihood of usefully contributing to advancement of rationality and the accomplishment of useful goals”).
Second, whether there is in fact some connection (and what that connection is) between whether some comment “sounds very combative”, and whether that comment “will generally evoke adversarial feelings” (these are in fact two different things, not one thing phrased in two different ways!), and between the latter and whether a good discussion ensues, are not immutable facts! They are amenable to volitional alteration, i.e. you can choose how (or if!) these things affect one another, because you do in fact (I assume) have control of your actions, your words, your reasoning process, etc. (And to the extent that you do not have such control—well, that is a flaw, which you ought to be trying to fix. Or so I claim! Perhaps you disagree; but in order for us to resolve this disagreement, we must be able to refer to it—which we cannot do if we simply encode, in the term “vibes”, the assumption that the model I describe here is wrong.)
To speak of the “vibes” of a comment abstracts away from (and thus obscures) this critical structure in the patterns of how people react to comments.
P.S.:
It’s not “someone”, it’s very obviously @habryka. (Who else would strong-downvote all of my comments on this post, so consistently and so quickly after they get posted, and with a vote weight of 10—if not the person who gets notifications whenever comments get posted on this post, and who in fact has a vote weight of 10?)
I definitely don’t agree with this. Especially in this particular case, I think almost everyone will have the same reaction, and I don’t think people who don’t have this reaction are meaningfully better at rationality. (In general, I don’t think the way to improve your rationality is to make yourself as numb as possible.)
That’s because I phrased it poorly. I was trying to gesture at the same feeling with both; I just don’t know what to call it. Like, the feeling that the situation you’re in has become adversarial. I think it’s a weaker version of what you’d feel if you were in a group conversation and suddenly one person insults someone else, or something like that.
I completely agree with this, but “you can theoretically train yourself to not be bothered by it” is true for a lot things, and no one thinks that we should therefore give people a free pass to do them. You can train yourself to have equanimity to physical pain; presumably this wouldn’t make it okay for me to inflict physical pain on you. You need more pieces to argue that we should ask people to self-modify to not have the reaction, rather than avoid triggering the reaction.
In this case, that strikes me as not reasonable. This particular reaction (i.e., having this adversarial feeling that I failed to describe well in response to the line I quoted) seems both very hard to get rid of, and probably not desirable to get rid of. There’s a very good evolutionary reason why we have it, to detect conflict, and it still seems pretty valuable today. I think I’m unusually sensitive to this vibe, and I think this is pretty useful for navigating social situations. Spotting potential conflict early is useful; this stuff is relevant information.
This may well be true, but surely you see that the “almost” is doing quite a bit of work here, yes?
I mean, think of all the true statements we might make, of the form “Almost everyone will X”. And now consider how many of them stop being true if we quantify “everyone” not over the population of the Earth, but over the commentariat of this forum. There’s a lot of those!
So, is your claim here one of the latter sort? Surely we can’t assume that it isn’t, right?
And even supposing that it’s not, we still have this—
What makes one better at rationality is behaving as if one does not have said reaction (or any reaction at all). Whether that’s because the reaction is absent, or because it’s present but controlled, is not really important.
I wholly reject this framing. This is just a thoroughly tendentious way of putting things. We are not talking about some important information which you’re being asked to ignore. We’re talking about having an emotional reaction which interferes with your ability to consider what is being said to you. The ability to not suffer that detrimental effect is not “numbness”.
Right, but the key point here is that the sentence you quoted isn’t actually anything like one person insulting someone else. You say “weaker version”, but that’s underselling the difference, which is one of kind, not merely of degree.
I’ve said something like this before, but it really bears repeating: if someone reads a paragraph like this one—
—and experiences this as something akin to a personal insult, which seriously impacts their ability to participate in the conversation, then this person is simply not ready to participate in any kind of serious discussion, period. This is the reaction of a child, or of someone who hasn’t ever had to have any kind of serious adult conversation. Being able to deal with straightforward statements like this is a very low bar. It’s a low bar even for many ordinary professional contexts, never mind for Less Wrong (where the bar should be higher).
Of course, but the pieces in question seem rather obvious to me. But sure, let’s make them explicit:
You punching me doesn’t meaningfully contribute anything to the discussion; it doesn’t communicate anything of substance. Conversely, the sort of comment we’re discussing is the most effective and efficient way of communicating the relevant object-level point.
You punching me is a unilateral action on your part, which I cannot avoid (presumably; if I consent to the punch then that’s a very different matter, obviously). On the other hand, nobody’s forcing you to read anything on Less Wrong.
There’s no “theoretically” about it; it’s very easy to not be bothered by this sort of thing (indeed, I expect that when being bothered by comments like the example at hand is not rewarded with status, most people simply stop being bothered by them, without any effort on their part). (Contrast this with “train[ing] yourself to have equanimity to physical pain”, which is, as far as I know, not easy.)
Not being bothered by this sort of thing is good (cf. the earlier parts of this comment); being bothered by it is bad. Conversely, not being bothered by pain is probably bad (depending on what exactly that involves).
Finally, please note that “we should ask people to self-modify to not have the reaction” is a formulation which presupposes a corrective approach. I do not claim that corrective approaches are necessarily the wrong ones in this case, but there is no reason to assume that they’re the best ones, much less the only ones. Selective (and, to a lesser extent, structural) approaches are at least as likely as corrective ones to play a major role.
I strongly disagree with both parts of this claim. (See above.)
But that’s just the thing: you shouldn’t be thinking of object-level discussions on LW as “social situations” which you need to “navigate”. If that’s how you’re approaching things, then of course you’re going to have all of these reactions—and you’ve doomed the whole enterprise right from the start! You’re operating on too high a simulacrum level. No useful intellectual work will get done that way.
I was actually already thinking about just people on LessWrong when I wrote that. I think it’s almost everyone on LessWrong.
Right, I mean, you’re repeatedly and categorically framing the problem as lying solely with the person who gets bothered. You’ve done the same in the previous post where I opted out of the discussion.
It’s not my view at all. I think a community will achieve much better outcomes if being bothered by the example message is considered normal and acceptable, and writing the example message is considered bad.
I don’t know how to proceed from here. Note that I’m not trying to convince you, I’m only responding. What I can say is, if you are trying to convince me, you have to do something other than what you did in this comment, because I felt like you primarily told me things that I already understood from the other comment thread (where I truncated the discussion). In particular, there are a lot of times where you’re just stating something as if you expect me to agree with it (like all the instances I quoted), but I don’t—and again, I feel like I already knew from the other comment that you think this.
For completeness:
This argues that the pain thing is different; I agree it’s different; it doesn’t mean that self-modification (or selection) is desirable here.
I already said that I think ~everyone is bothered by it, so obviously, disagree. (I don’t even believe that you’re not bothered by this kind of thing;[1] I think you are and it does change your conduct as well, although I totally believe that you believe you’re not bothered.)
Actually I technically do agree with this—in the sense that, if you could flip a switch where you’re not bothered by it but you still notice the vibe, that would be good—but I think it’s not practically achievable so it doesn’t really matter.
This is something I usually wouldn’t say out of politeness/vibe protection, but since you don’t think I should be doing that, saying it kind of feels more respectful, idk.
That’s a strange position to hold on LW, where it has long been a core tenet that one should not be bothered by messages like that. And that has always been the case, whether it was LW2, LW1 (remember, say, ‘babyeaters’? or ‘decoupling’? or Methods of Rationality), Overcoming Bias (Hanson, ‘politics is the mindkiller’), SL4 (‘Crocker’s Rules’) etc.
I can definitely say on my own part that nothing of major value I have done as a writer online—whether it was popularizing Bitcoin or darknet markets or the embryo selection analysis or writing ‘The Scaling Hypothesis’—would have been done if I had cared too much about “vibes” or how it made the reader feel. (Many of the things I have written definitely did make a lot of readers feel bad. And they should have. There is something wrong with you if you can read, say, ‘Scaling Hypothesis’ and not feel bad. I myself regularly feel bad about it! But that’s not a bad thing.) Even my Wikipedia editing earned me doxes and death threats.
And this is because (among many other reasons) emotional reactions are inextricably tied up with manipulation, politics, and status—which are the very last things you want in a site dedicated to speculative discussion and far-out unpopular ideas, which will definitionally be ‘creepy’, ‘icky’, ‘cringe’, ‘fringe’, ‘evil’, ‘bad vibes’ etc. (Even the most brutal totalitarian dictatorships concede this when they set up free speech zones and safe spaces like the ‘science cities’.)
Someone once wrote, upon being newly arrived to LW, a good observation of the local culture about how this works:
Many of our ideas and people are (much) higher status than they used to be. It is no surprise people here might care more about status than they used to, in the same way that rich people care more about taxes than poor people.
But they were willing to be status-blind and not prize emotionality, and that is why they could become high-status. And barring the sudden discovery of an infallible oracle, we can continue to expect future high-status things to start off low-status...
This doesn’t feel like it engages with anything I believe. None of the things you listed are things I object to. I don’t object to how you wrote the Scaling Hypothesis post, I don’t object to the Baby Eaters, I super don’t object to decoupling, and I super extra don’t object to ‘politics is the mind-killer’. The only one I’d even have to think about is Crocker’s Rules, but I don’t think I have an issue with those, either. They’re notably something you opt into.
I claim that Said’s post is bad because it can be rewritten into a post that fulfills the same function but doesn’t feel as offensive.[1] Nothing analogous is true for the Scaling Hypothesis. And it’s not just that you couldn’t rewrite it to be less scary but convey the same ideas; rather the whole comparison is a non-starter because I don’t think that your post on the scaling hypothesis has bad vibes, at all. If memory serves (I didn’t read your post in its entirety back then, but I read some of it and I have some memory of how I reacted), it sparks a kind of “holy shit this is happening and extremely scary ---(.Ó﹏Ò.)” reaction. This is, like, actively good. It’s not in the same category as Said’s comment in any way whatsoever.
I agree that it is better to not be bothered. My position is not “you should be more influenced by vibes”, it’s something like “in the real world vibes are about 80% of the causal factors behind most people’s comments on LW and about 95% outside of LW, and considering this fact about how brains work in how you write is going to be good, not bad”. In particular, as I described in my latest response to Zack, I claim that the comments that I actually end up leaving on this site are significantly less influenced by vibes than Said’s because recognizing what my brain does allows me to reject it if I want to. Someone who earnestly believes to be vibe-blind while not being vibe-blind at all can’t do that.
This honestly just doesn’t seem related, either. Status-blindness is more specific than vibe-blindness, and even if vibe-blindness were a thing, it wouldn’t contradict anything I’ve argued for.
it is not identical in terms of content, as Zack pointed out, but here I’m using function in the sense of the good thing the comment achieves, which is to leave a strongly worded and valid criticism of the post. (In actual fact, I think my version is significantly more effective at doing that.)
This description of ‘bad vibes’ vs ‘good vibes’ and what could be ‘be rewritten into a post that fulfills the same function’, is confusing to me because I would have said that that is obviously untrue of Scaling Hypothesis (and as the author, I should hope I would know), and that was why I highlighted it as an example: aside from the bad news being delivered in it, I wrote a lot of it to be deliberately rude and offensive—and those were some of the most effective parts of it! (And also, yes, made people mad at me.) Just because the essay was effective and is now high-status doesn’t change that. It couldn’t’ve been rewritten and achieved the same outcome, because that was much of the point.
(To be clear, my take on all of this is that it is often appropriate to be rude and offensive, and often inappropriate. What has made these discussions so frustrating is that Said continues to insist that no rudeness or offensiveness is present in any of his writing, which makes it impossible to have a conversation about whether the rudeness or offensiveness is appropriate in the relevant context.
Like, yeah, LessWrong has a culture, a lot of which is determined by what things people are rude and offensive towards. One of my jobs as a moderator is to steer where that goes. If someone keeps being rude and offensive towards things I really want to cultivate on the site, I will tell them to stop, or at least ask that they provide arguments for why this thing, which I do not think is worth scorn, deserves scorn.
But if that person then insists that no rudeness or offensiveness was present in any of their writing, despite an overwhelming fraction of readers reading it as such, then they are either a writer so bad at communication as to not belong on the site, or trying to avoid accountability for the content of their messages, both of which leave little room but to take moderation action that limits their contributions to the site)
When you say that “it is often appropriate to be rude and offensive”, and that LW culture admits of things toward which it is acceptable to be “rude and offensive”, this would seem to imply that the alleged rudeness and offensiveness as such is not the problem with my comments, but rather that the problem is what I am supposedly being rude and offensive towards; and that the alleged “rudeness and offensiveness” would not itself ever be used against me (and that if a moderator tried to claim that “rudeness and offensiveness” is itself punishable regardless of target, or if a user tried to claim that LW norms forbid being rude and offensive, then you’d show up and say “nope, wrong, actually being rude and offensive is fine as long as it’s toward the right things, so kindly withdraw that particular criticism; Said has violated no rules or norms by being rude and offensive as such”). True? Or not?
Yep, though of course there are priors. The thing I am saying is that there are at least some things (and not just an extremely small set of things) that it is OK to be rude towards, not that the average quality/value-produced of rude and non-rude content is the same.
For enforcement efficiency reasons, cultural Schelling point reasons, and various other reasons, it might still make sense to place something like a burden of proof on the person who claims that in this case rudeness and offensiveness is appropriate, so enforcement for rudeness without justification might still make sense, and my guess is it does indeed make sense.
Also, for you in particular, I have seen the things that you tend to be rude and offensive towards, at least historically, and haven’t been very happy about that, and so the prior is more skewed against that. My guess is I would tell you in particular that you have a bad track record of aiming it well, and so would request additional justification on the marginal case from your side (similar to how we generally treat repeat criminal offenders differently from first-time offenders, and often remove whole sets of otherwise completely legal actions from their option pool in prevention of future harm).
Ok, cool, I’ll definitely…
… ah. So, less “yep” and more “nope”.
On the other hand, maybe this “burden of proof” business isn’t so bad. Actually, I was just reading your comments on the recent post about eating honey, including this top-level comment where you say that the ideas in the OP “sound approximately insane”, that they’re “so many orders of magnitude away from what sounds reasonable” that you cannot but seriously entertain the notion that said ideas were not motivated by reasonably thinking about the topic, but rather by “social signaling madness where someone is trying to signal commitment to some group standard of dedication”.
I thought that it was a good comment, personally. (Actually, I found basically all your comments on that post to be upvote-worthy.) That comment is currently at 47 karma, so it would seem that there’s more or less a consensus among LW users that it’s a good comment. I did see that you edited the comment (after I’d initially read and upvoted it) to include somewhat of a disclaimer:
Is this the sort of thing that you have in mind, when you talk about burden of proof?
If I include disclaimers like this at the end of all of my comments, does that suffice to solve all of the problems that you perceive in said comments? (And can I then be as “rude and offensive” as I like? Hypothetically, that is. If I were inclined to be “rude and offensive”.)
Yes-ish, though I doubt we have a shared understanding of what “that sort of thing” is.
No, of course not. As I explained, as moderator and admin I will curate or at least apply heavy pressure on which things receive scorn and rudeness on LW.
A disclaimer is the start of an argument. If the argument is wrong by my lights, you will still get told off. The standard is not “needs to make an argument”, it’s (if anything) “needs to make an argument that I[1] think is good”. Making an argument is not in itself something that does something.
(Not necessarily just me, there are other mods, and a kind of complicated social process that involves many stakeholders that can override me, or I will try to take into account and integrate, but for the sake of conversation we can assume it’s “me”)
Who decides if the argument suffices? You and the other mods, presumably? (EDIT: Confirmed by subsequent edit to parent comment.)
If so, then could you explain how this doesn’t end up amounting to “the LW mods have undertaken to unilaterally decide, in advance, what are the correct views on all topics and the correct positions in all arguments”? Because that’s what it seems like you have to do, in order for your policy to make any sense.
EDIT: Could you expand on “a kind of complicated social process that involves many stakeholders that can override me”? I don’t know what you mean by this.
At the end of the day, I[1] have the keys to the database and the domain, so in some sense anything that leaves me with those keys can be summarized as “the LW mods have undertaken to unilaterally decide, in advance, what are the correct views on all topics and the correct positions in all arguments”.
But of course, that is largely semantic. It is of course not the case that I have or would ever intend to make a list of allowed or forbidden opinions on LessWrong. In contrast, I have mostly procedural models about how LessWrong should function, including the importance of LW as a free marketplace of ideas, a place where contradicting ideas can be discussed and debated, and many other aspects of what will cause the whole LW project to go well. Expanding on all of them would of course far exceed the scope of this comment thread.
On the specific topic of which things deserve scorn or ridicule or rudeness, I also find it hard to give a very short summary of what I believe. We have litigated some past disagreements in the space (such as whether people using their moderation tools to ban others from their blogpost should be subject to scorn or ridicule in most cases), which can provide some guidance, though the breadth of things we’ve covered is fairly limited. It is also clear to me that the exact flavor of rudeness and aggressiveness matters quite a bit. I favor straightforward aggression over passive aggression, and have expressed my model that “sneering” as a mental motion is almost never appropriate (though not literally never, as I expanded on).
And on most topics, I simply don’t know yet, and I’ll have to figure it out as it comes up. The space of ways people can be helpfully or unhelpfully judgmental and aggressive is very large, and I do not have most of it precomputed. I do have many more principles I could expand on, and would like to do so sometime, but this specific comment thread does not seem like the time.
Again, not just me, but also other mods and stakeholders and stuff
It seems clear that your “in some sense” is doing pretty much all the work here.
Compare, again, to Data Secrets Lox: there, I have the keys to the database and the domain (and in the case of DSL, it really is just me, no one else—the domain is just mine, the database is just mine, the server config passwords… everything), and yet I don’t undertake to decide anything at all, because I have gone to great lengths to formally surrender all moderation powers (retaining only the power of deleting outright illegal content). I don’t make the rules; I don’t enforce the rules; I don’t pick the people who make or enforce the rules. (Indeed the moderators—who were chosen via the system that I put into place—can even temp-ban me, from my own forum, that I own and run and pay for with my own personal money! And they have! And that is as it should be.)
I say this not to suggest that LW should be run the way that DSL is run (that wouldn’t really make sense, or work, or be appropriate), but to point out that obviously there is a spectrum of the degree to which having “the keys to the database and the domain” can, in fact, be meaningfully and accurately talked about as “the … mods have undertaken to unilaterally decide, in advance, what are the correct views on all topics and the correct positions in all arguments”—and you are way, way further along that spectrum than the minimal possible value thereof. In other words, it is completely possible to hold said keys, and yet (compared to how you run LW) not, in any meaningful sense, undertake to unilaterally decide anything w.r.t. correctness of views and positions.
Yes, well… the problem is that this is the central issue in this whole dispute (such as it is). The whole point is that your preferred policies (the ones to which I object) directly and severely damage LW’s ability to be “a free marketplace of ideas, a place where contradicting ideas can be discussed and debated”, and instead constitute you effectively making a list of allowed or forbidden opinions on this forum. Like… that’s pretty much the whole thing, right there. You seem to want to make that list while claiming that you’re not making any such list, and to prevent the marketplace of ideas from happening while claiming that the marketplace of ideas is important. I don’t see how you can square this circle. Your preferred policies seem to be fundamentally at odds with your stated goals.
I don’t see where I am making any such list, unless you mean “list” in a weird way that doesn’t involve any actual lists, or even things that are kind of like lists.
I don’t think that’s an accurate description of DSL; indeed, it appears to me that the de-facto list produced by the kind of policy you have chosen is pretty predictable (and IMO does not result in particularly good outcomes). Just because you have some other people make the choices doesn’t change the predictability of the actual outcome, or who is responsible for it.
I already made the obvious point that of course, in some sense, I/we will define what is OK on LessWrong via some procedural way. You can dislike the way I/we do it.
There is definitely no “fundamentally at odds”, there is a difference in opinion about what works here, which you and me have already spent hundreds of hours trying to resolve, and we seem unlikely to resolve right now. Just making more comments stating that “I am wrong” in big words will not make that happen faster (or more likely to happen at all).
Seems like we got lost in a tangle of edits. I hope my comment clarifies sufficiently, as it is time for me to sleep, and I am somewhat unlikely to pick up this thread tomorrow.
Sure, I appreciate the clarification, but my last question still stands:
Who are these stakeholders, exactly? How might they override you?
Not going to go into this, since I think it’s actually a pretty complicated situation, but at a very high level some obvious groups that could override me:
The Lightcone Infrastructure board (me, Vaniver, Daniel Kokotajlo)
If Eliezer really wanted to, he could probably override me
A more distributed consensus among what one might consider the leadership of the rationality community (like, let’s say Scott Alexander and Ryan Greenblatt and Buck and Nate and John Wentworth and Gwern all roughly agree on me messing up really badly)
There would be lots more to say on this topic, but as I said, I am unlikely to pick this thread up again, so I hope that’s good enough!
(This is a tangent to the thread and so I don’t plan to reply further on this, but I just wanted to mention that while I view Greenblatt and Shlegeris as stakeholders in LessWrong, a space they’ve made many great contributions to and are quite active in, I don’t view them as leadership of the rationality community.)
Rudeness and offensiveness are, in the general case, two-place functions: text can be offensive to some particular reader, but short of unambiguous blatant insults, there’s not going to be a consensus about what is “offensive”, because people vary widely (both by personal disposition and vagarious incentives) in how easy they are to offend.
When it is denied that Achmiz’s comments are offensive, the claim isn’t that no one is offended. (That would be silly. We have public testimony from people who are offended!) The claim is that the text isn’t rude in a “one-place” sense (no personal insults, &c.).
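(To make the one-place/two-place distinction concrete, here is a toy sketch in code; the predicate names and the scoring are purely illustrative assumptions of mine, not a claim about how anyone actually evaluates comments:)

```python
# Toy sketch: "insulting" as a one-place property of the text vs.
# "offensive" as a two-place relation between a text and a particular reader.

def contains_personal_insult(text: str) -> bool:
    """One-place check: depends only on the text (illustrative keyword list)."""
    blatant_insults = ("idiot", "moron", "liar")
    return any(word in text.lower() for word in blatant_insults)

def is_offended(text: str, reader_sensitivity: float) -> bool:
    """Two-place check: the same text offends some readers and not others."""
    # Crude stand-in for "this text criticizes me": higher score = harsher.
    harshness = 0.8 if "obligation" in text.lower() else 0.1
    return harshness > 1.0 - reader_sensitivity

comment = "There is always an obligation by any author to respond."
print(contains_personal_insult(comment))              # False, for every reader
print(is_offended(comment, reader_sensitivity=0.9))   # True: an easily-offended reader
print(is_offended(comment, reader_sensitivity=0.1))   # False: a thick-skinned reader
```

The point of the toy version is just that a moderation standard keyed to the two-place predicate hands control to whoever has, or professes, the highest sensitivity.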
The reason that “one-place” rudeness is the relevant standard is because it would be bad if a fraction of easily-offended readers (even a substantial fraction—I don’t think you can defend the adjective “overwhelming”) could weaponize their emotions to censor expressions of ideas that they don’t like.
For example, take Achmiz’s January 2020 comment claiming that, “There is always an obligation by any author to respond to anyone’s comment along these lines. If no response is provided to (what ought rightly to be) simple requests for clarification [...] the author should be interpreted as ignorant.”
The comment is expressing an opinion about discourse norms (“There is always an obligation”) and a belief about what Bayesian inferences are warranted by the absence of replies to a question (“the author should be interpreted as ignorant”). It makes sense that many people disagree with that opinion and that belief (say, because they think that some of the questions that Achmiz thinks are good, are actually bad, and that ignoring bad questions is good). Fine.
But beyond mere disagreement, to characterize such a comment as offensive (because it criticizes people who don’t respond to questions), is something I find offensive. (If you’re thinking of allegedly worse behavior from Achmiz than this January 2020 comment, you’re going to need to provide the example.) Sometimes people who use the same website as you have opinions or beliefs that imply that they disapprove of your behavior! So what? I think grown-ups should be able to shrug this off without calling for draconian and deranged censorship policies. The mod team should not be pandering to such pathetic cry-bullying.
The comment is offensive because it communicates things other than its literal words. Autistically taking it apart word by word and saying that it only offends because it is criticism ignores this implicit communication.
Gwern himself refers to the “rude and offensive” part in this subthread as a one-place function:
I have no interest in doing more hand-wringing about whether Said’s comments are intended to make people feel judged or not, and don’t find your distinction of “no personal insults” as somehow making the rudeness more objective compelling. If you want we can talk about the Gwern hypothetical in which he clearly intended to be rude and offensive towards other people.
This is indeed a form of aggression and scorn that I do not approve of on this site, especially after extensive litigation.
I’ll leave it on this thread, but as a concrete example for the sake of setting clear guidelines, strawmanning all (or really any) authors who have preferences about people not being super aggro in their comment threads as “pathetic cry-bullying” and “calling for draconian and deranged censorship policies” is indeed one of the things that will get you banned from this site on other threads! You have been warned!
I don’t think the relevant dispute about rudeness/offensiveness is about one-place and two-place functions, I think it’s about passive/overt aggression. With passive aggression you often have to read more of the surrounding context to understand what is being communicated, whereas with overt aggression it’s clear if you just locally inspect the statement (or behavior), which sounds like one / two place functions (because ppl with different information states look at the same message and get different assessments), but isn’t.
For instance, suppose Alice doesn’t invite Bob to a party, and then Bob responds by ignoring all of Alice’s texts and avoiding eye contact most of the time. Now any single instance of “not responding to a text” isn’t aggression, but in the context of a change in the relationship, from it being typical to reply same-day to zero replies, it can be understood as retaliation. And of course, even then it’s not provable; there are other possible explanations (such as Bob taking a GLP-1 agonist and being quite low-energy at the minute, don’t think too hard about why I picked that example), which makes it a great avenue for hard-to-litigate retaliation.
Does everyone here remember and/or agree with my point in The Nature of Offense, that offense is about status, which in the current context implies that it’s essentially impossible to avoid giving offense while delivering strong criticism (as it almost necessarily implies that the target of criticism deserves lower status for writing something seriously flawed, having false/harmful beliefs, etc.)? @habryka @Zack_M_Davis @Said Achmiz
This discussion has become very long and I’ve been travelling so I may have missed something, but has anyone managed to write a version of Said’s comment that delivers the same strength of criticism while avoiding offending its target? (Given the above, I think this would be impossible.)
Not a direct response, but I want to take some point in this discussion (I think I said this to Zack in-person the other day) to say that, while some people are arguing that things should as a rule be collaborative and not offensive (e.g. to varying extents Gordon and Rafael), this is not the position that the LW mods are arguing for. We’re arguing that authors on LessWrong should be able to moderate their posts with different norms/standards from one another, and that there should not reliably be retribution or counter-punishment by other commenters for them moderating in that way.
I could see it being confusing because sometimes an author like Gordon is moderating you, and sometimes a site-mod like Habryka is moderating you, but they are using different standards, and the LW-mods are not typically endorsing the author standards as our own. I even generally agree with many of the counterarguments that e.g. Zack makes against those norms being the best ones. Some of my favorite comments on this site are offensive (where ‘offensive’ is referring to Wei’s meaning of ‘lowering someone’s social status’).
What is currently the acceptable range of moderation norms/standards (according to the LW mod team)? For example if someone blatantly deletes/bans their most effective critics, is that acceptable? What if they instead subtly discourage critics (while being overtly neutral/welcoming) by selectively enforcing rules more stringently against their critics? What if they simply ban all “offensive” content, which as a side effect discourages critics (since as I mentioned earlier, criticism almost inescapably implies offense)?
And what does “retribution or counter-punishment” mean? If I see an author doing one of the above, and question or criticize that in the comments or elsewhere, is that considered “retribution or counter-punishment” given that my comment/post is also inescapably offensive (status-lowering) toward the author?
I think the first answer is “Mostly people aren’t using this feature, and the few times people have used it it has not felt to us like abuse or strongly needing to be pushed back on” so I don’t have any examples to point to.
But I’ll quickly generate thoughts on each of the hypothetical scenarios you briefly gestured to.
It’d depend on how things played out. If Andrew writes a blogpost with a big new theory of rationality, and then Bob and Charlie and Dave all write decisive critiques and then their comments are deleted and banned from commenting on his posts, I think it’s quite plausible that they’ll write a new post together with the copy-paste of their comments and it’ll get more karma than the original. This seems like a good-enough outcome to me. On the other hand if Andrew only gets criticism from Bob, and then deletes Bob’s comments and bans him from commenting on his posts, and then Bob leaves the site, I would take more active action, such as perhaps removing Andrew’s ability to ban people, and reaching out to Bob to thank him for his comments and encourage him to return.
That sounds like there’d be some increased friction on criticism. Hopefully we’d try to notice it and counteract it, or hopefully the commenters who were having an annoying experience being moderated would notice and move to shortform or posts and do their criticism from there. But plausibly there’d just be some persistent additional annoyances or costs that certain users would have to pay.
I mean, again, probably this would just be very incongruous with LessWrong and it wouldn’t really work and they’d have to ban like 30+ users because people wouldn’t get this and would keep doing things the author didn’t like, and the author would eventually leave if they needed that sort of environment, or we’d step in after like 5 and say “this is kind of crazy, you have to stop doing this, it isn’t going to work out, we’re removing your ability to ban users”. So many of the good comments on LessWrong lower their interlocutor’s status in some way.
It means actions that predictably make the author feel that them using the ban feature in general is illegitimate or that using it will cause them to have their reputation attacked, regardless of reason or context, in response to them using the ban feature.
Many many writers on LessWrong are capable of critiquing a single instance of a ban while taking care to communicate that they are not pushing back on all instances of banning, and can also credibly offer support in other instances that are more reasonable.
Generally it is harder to signal this when you are complaining about your own banning. For in-person contexts (e.g. events) I generally spend effort to ensure that people do not feel any cost for not inviting me to events or spaces, and do not expect that I will complain loudly or cause them to lose social status for it, and a similar (but not identical) heuristic applies here. If someone finds interacting with you very unpleasant and you don’t understand quite why, it’s often bad form to loudly complain about it every time they don’t want to interact with you any more, even if you have an uncharitable hypothesis as to why.
There is still good form and bad form to imposing costs on people for moderating their spaces, and costs imposed on people for moderating their spaces (based on disagreement or even trying to fix biases in the moderation) are the most common reason for good spaces not existing; moderation is unpleasant work, lots of people feel entitled to make strong social bids on you for your time and to threaten to attack your social standing, and I’ve seen many spaces degrade due to unwillingness to moderate. You should of course think about this if you are considering reliably complaining loudly every time anyone uses a ban feature on people.
Added: I hope you get a sense from reading this that your questions don’t have simple answers, but that the scenarios you describe require active steering depending on the dynamics at play. I am somewhat wary you will keep asking me a lot of short questions that, due to your inexperience moderating spaces, you will assume have simple answers, and I will have to do lots of work generating all the contexts to show how things play out, else Said, or someone allied with him against his being moderated on LW, will claim I am unable to answer the most basic of questions and that this shows me to be either ignorant or incompetent. And, man, this is a lot of moderation discussion.
If I was in this circumstance, I would be pretty worried about my own biases, and ask neutral or potentially less biased parties whether there might be more charitable and reasonable hypotheses why that person doesn’t want to interact with me. If there isn’t though, why shouldn’t I complain and e.g. make it common knowledge that my valuable criticism is being suppressed? (Obviously I would also take into consideration social/political realities, not make enemies I can’t afford to make, etc.)
But most people aren’t using this feature, so to the extent that LW hasn’t degraded (and that’s due to moderation), isn’t it mainly because of the site moderators and karma voters? The benefits of having a few people occasionally moderate their own spaces hardly seems worth the cost (to potential critics and people like me who really value criticism) of not knowing when their critiques might be unilaterally deleted or banned by post authors. I mean aside from the “benefit” of attracting/retaining the authors who demand such unilateral powers.
Aside from the above “benefit”, it seems like you’re currently getting the worst of both worlds: lack of significant usage and therefore potential positive effects, and lots of controversy when it is occasionally used. If you really thought this was an important feature for the long term health of the community, wouldn’t you do something to make it more popular? (Or have done it in the past 7 years since the feature came out?) But instead you (the mod team) seem content that few people use it, only coming out to defend the feature when people explicitly object to it. This only seems to make sense if the main motivation is again to attract/retain certain authors.
It seems like if you actually wanted or expected many people to use this feature, you would have written some guidelines on what people can and can’t do, or under what circumstances their moderation actions might be reversed by the site moderators. I don’t think I was expecting the answers to my questions to necessarily be simple, but rather that the answers already exist somewhere, at least in the form of general guidelines that might need to be interpreted to answer my specific questions.
I mean, mostly we’ve decided to give the people who complain about moderation a shot, and compensate by spending much much more moderation effort from the moderators. My guess is this has cost a large amount of counterfactual quality of the site, many contributors, etc.
In general, I find arguments of the form “so to the extent that LW hasn’t been destroyed, X can’t be that valuable” pretty weak. It’s very hard to assess the counterfactual, and “if not X, LessWrong would have been completely destroyed” is rarely the case for almost any X that is in dispute.
My guess is LW would be a lot better if more people felt comfortable moderating things, and in the present world, there are a lot of costs born by the site admins that wouldn’t be necessary otherwise.
What do you mean by this? Until I read this sentence, I saw you as giving the people who demand unilateral moderation powers a shot, and denying the requests of people like me to reduce such powers.
My not very confident guess at this point is that if it weren’t for people like me, you would have pushed harder for people to moderate their own spaces more, perhaps by trying to publicly encourage this? And why did you decide to go against your own judgment on it, given that “people who complain about moderation” have no particular powers, except the power of persuasion (we’re not even threatening to leave the site!), and it seems like you were never persuaded?
This seems implausible to me given my understanding of human nature (most people really hate to see/hear criticism) and history (few people can resist the temptation to shut down their critics when given the power and social license or cover to do so). If you want a taste of this, try asking DeepSeek some questions about the CCP.
But presumably you also know this (at least abstractly, but perhaps not as viscerally as I do, coming from a Chinese background, where even before the CCP, criticism in many situations was culturally/socially impossible), so I’m confused and curious why you believe what you do.
My guess is that you see a constant stream of bad comments, and wish you could outsource the burden of filtering them to post authors (or combine efforts to do more filtering). But as an occasional post author, my experience is that I’m not a reliable judge of what counts as a “bad comment”, e.g., I’m liable to view a critique as a low quality comment, only to change my mind later after seeing it upvoted and trying harder to understand/appreciate its point. Given this, I’m much more inclined to leave the moderation to the karma system, which seems to work well enough in leaving bad comments at low karma/visibility by not upvoting them, and even when it’s occasionally wrong, still provides a useful signal to me that many people share the same misunderstanding and it’s worth my time to try to correct (or maybe by engaging with it I find out that I still misjudged it).
But if you don’t think it works well enough… hmm I recall writing a post about moderation tech proposals in 2016 and maybe there has been newer ideas since then?
I mean, I have written like 50,000+ words about this at this point in various comment threads. About why I care about archipelagos, and why I think it’s hard and bad to try to have centralized control over culture, about how much people hate being in places with ambiguous norms, and many other things. I don’t fault you for not reading them all, but I have done a huge amount of exposition.
Because the only choice at this point would be to ban them, since they appear to be willing to take any remaining channel or any remaining opportunity to heap as much scorn and snark and social punishment as they can on anyone daring to do moderation they disagree with, and I value things like readthesequences.com and many other contributions from the relevant people enough that that seemed really costly and sad.
My guess is I will now do this, as it seems like the site doesn’t really have any other choice, and I am tired and have better things to do, but I think I was justified and right to be hesitant to do this for a while (though yes, ex post it would have obviously been better to just do that 5 years ago).
It seems to me there are plenty of options aside from centralized control and giving authors unilateral powers, and last I remember (i.e., at the end of this post) the mod team seems to be pivoting to other possibilities, some of which I would find much more reasonable/acceptable. I’m confused why you’re now so focused again on the model of authors-as-unilateral-moderators. Where have you explained this?
I have filled my interest in answering questions on this, so I’ll bow out and wish you good luck. Happy to chat some other time.
I don’t think we ever “pivoted to other possibilities” (Ray often makes posts with moderation things he is thinking about, and the post doesn’t say anything about pivoting). Digging up the exact comments on why ultimately there needs to be at least some authority vested in authors as moderators seems like it would take a while.
I meant pivot in the sense of “this doesn’t seem to be working well, we should seriously consider other possibilities” not “we’re definitely switching to a new moderation model”, but I now get that you disagree with Ray even about this.
Your comment under Ray’s post said:
This made me think you were also no longer very focused on the authors-as-unilateral-moderators model and was thinking more about subreddit-like models that Ray mentioned in his post.
BTW I’ve been thinking for a while that LW needs a better search, as I’ve also often been in the position being unable to find some comment I’ve written in the past.
Instead of one-on-one chats (or in addition to them), I think you should collect/organize your thoughts in a post or sequence, for a number of reasons including that you seem visibly frustrated that after having written 50k+ words on the topic, people like me still don’t know your reasons for preferring your solution.
Huh, ironically I now consider the AI Alignment Forum a pretty big mistake in how it’s structured (for reasons mostly orthogonal but not unrelated to this).
Agree.
I think I have elaborated non-trivially on my reasons in this thread, so I don’t really think it’s an issue of people not finding it.
I do still agree it would be good to do more sequences-like writing on it, though like, we are already speaking in the context of Ray having done that a bunch (referencing things like the Archipelago vision), and writing top-level content takes a lot of time and effort.
It’s largely an issue of lack of organization and conciseness (50k+ words is a minus, not a plus in my view), but also clearly an issue of “not finding it”, given that you couldn’t find an important comment of your own, one that (judging from your description of it) contains a core argument needed to understand your current insistence on authors-as-unilateral-moderators.
I’m having a hard time seeing how this reply connects to what I wrote. I didn’t say critics, I spoke much more generally. If someone wants to keep their distance from you because you have bad body odor, or because they think your job is unethical, and you either don’t know this or disagree, it’s pretty bad social form to go around loudly complaining every time they keep their distance from you. It makes it more socially costly for them to act in accordance with their preferences and makes a bunch of unnecessary social conflict. I’m pretty sure this is obvious and this doesn’t change if you’ve suddenly developed a ‘criticism’ of them.
I mean, I think it pretty plausible that LW would be doing even better than it is with more people doing more gardening and making more moderated spaces within it, archipelago-style.
I read you questioning my honesty and motivations a bunch (e.g. you have a few times mentioned that I probably only care about this because of status reasons I cannot mention, or to attract certain authors, and that my behavior is not consistent with believing that users moderating their own posts is a good idea), which are of course fine hypotheses for you to consider. After spending probably over 40 hours writing this month explaining why I think authors moderating their posts is a good idea and making some defense of myself and my reasoning, I think I’ve done my duty in showing up to engage with this semi-prosecution for the time being, and will let ppl come to their own conclusions. (Perhaps I will write up a summary of the discussion at some point.)
Great, so all you need to do is make a rule specifying what speech constitutes “retribution” or “counterpunishment” that you want to censor on those grounds.
Maybe the rule could be something like, “No complaining about being banned by a specific user (but commenting on your own shortform strictly about the substance of a post that you’ve been banned from does not itself constitute complaining about the ban)” or “No arguing against the existence of the user ban feature except in designated moderation threads (which get algorithmically deprioritized in the new Feed).”
It’s your website! You have all the hard power! You can use the hard power to make the rules you want, and then the users of the website have a clear choice to either obey the rules or be banned from the site. Fine.
What I find hard to understand is why the mod team seems to think it’s good for them to try to shape culture by means other than clear and explicit rules that could be neutrally enforced. Telling people to “stop optimizing in a fairly deep way” is not a rule because of how vague and potentially all-encompassing it is. Telling people to avoid “mak[ing] people feel judged or not” is not a rule because I don’t have control over how other people feel.
“Don’t tell people ‘I’m judging you about X’” is a rule. I can do that.
What I can’t do is convincingly pretend to be a person with a completely different personality such that people who are smart about subtext can’t even guess from subtle details of my writing style that I might privately be judging them.
I mean, maybe I could if I tried very hard? But I have too much self-respect to try. If the mod team wants to force temperamentally judgemental people to convincingly pretend to be non-judgemental, that seems really crazy.
I know, the mods didn’t say “We want temperamentally judgemental people to convincingly pretend to have a completely different personality” in those words; rather, Habryka said he wanted to “avoid a passive aggressive culture tak[ing] hold”. I just don’t see what the difference is supposed to be in practice.
Mm, I think sometimes I’d rather judge on the standard of whether the outcome is good, rather than exclusively on the rules of behavior.
A key question is: Are authors comfortable using the mod tools the site gives them to garden their posts?
You can write lots of judgmental comments criticizing an author’s posts, and then they can ban you from their comments because they find engaging with you to be exhausting, and then you can make a shortform where you and your friends call them a coward, and then they stop using the mod tools (and other authors do too) out of a fear that using the mod tools will result in a group of people getting together to bully and call them names in front of the author’s peers. That’s a situation where authors become uncomfortable using their mod tools. But I don’t know precisely what comment was wrong and what was wrong with it such that had it not happened the outcome would counterfactually not have obtained i.e. that you wouldn’t have found some other way to make the author uncomfortable using his mod tools (though we could probably all agree on some schelling lines).
Also I am hesitant to fully outlaw behavior that might sometimes be appropriate. Perhaps there are some situations where it’s appropriate to criticize someone on your shortform after they banned you. Or perhaps sometimes you should call someone a coward for not engaging with your criticism.
Overall I believe sometimes I will have to look at the outcome and see whether the gain in this situation was worth the cost, and directly give positive/negative feedback based on that.
Related to other things you wrote, FWIW I think you have a personality that many people would find uncomfortable interacting with a lot. In person, I regularly read you as being deeply pained and barely able to contain strongly emotional and hostile outbursts. I think just trying to ‘follow the rules’ might not succeed at making everyone feel comfortable interacting with you, even via text, if they feel a deep hostility from you to them that is struggling to contain itself with rules like “no explicit insults”, and sometimes the right choice for them will just be to not engage with you directly. So I think it is a hypothesis worth engaging with that you should work to change your personality somewhat.
To be clear I think (as Said has said) that it is worth people learning to be able to make space to engage with people like you who they find uncomfortable, because you raise many good ideas and points (and engaging with you is something I relatively happily do, and this is a way I have grown stronger relative to myself of 10 years ago), and I hope you find more success as I respect many of your contributions, but I think a great many people who have good points to contribute don’t have as much capacity as me to do this, and you will sometimes have to take some responsibility for navigating this.
A key reason to favor behavioral rules over trying to directly optimize outcomes (even granting that enforcement can’t be completely mechanized and there will always be some nonzero element of human judgement) is that act consequentialism doesn’t interact well with game theory, particularly when one of the consequences involved is people’s feelings.
If the popular kids in the cool kids’ club don’t like Goldstein and your only goal is to make sure that the popular kids feel comfortable, then clearly your optimal policy is to kick Goldstein out of the club. But if you have some other goal that you’re trying to pursue with the club that the popular kids and Goldstein both have a stake in, then I think you do have to try to evaluate whether Goldstein “did anything wrong”, rather than just checking that everyone feels comfortable. Just ensuring that everyone feels comfortable at all costs, without regard to the reasons why people feel uncomfortable or any notion that some reasons aren’t legitimate grounds for intervention, amounts to relinquishing all control to anyone who feels uncomfortable when someone else doesn’t behave exactly how they want.
Something I appreciate about the existing user ban functionality is that it is a rule-based mechanism. I have been persuaded by Achmiz and Dai’s arguments that it’s bad for our collective understanding that user bans prevent criticism, but at least it’s a procedurally “fair” kind of badness that I can tolerate, not completely arbitrary tyranny. The impartiality really helps. Do you really want to throw away that scrap of legitimacy in the name of optimizing outcomes even harder? Why?
But I’m not trying to make everyone feel comfortable interacting with me. I’m trying to achieve shared maps that reflect the territory.
A big part of the reason some of my recent comments in this thread appeal to an inability or justified disinclination to convincingly pretend to not be judgmental is because your boss seems to disregard with prejudice Achmiz’s denials that his comments are “intended to make people feel judged”. In response to that, I’m “biting the bullet”: saying, okay, let’s grant that a commenter is judging someone; to what lengths must they go to conceal that, in order to prevent others from predictably feeling judged, given that people aren’t idiots and can read subtext?
I think there’s something much more fundamental at stake here, which is that an intellectual forum that’s being held hostage to people’s feelings is intrinsically hampered and can’t be at the forefront of advancing the art of human rationality. If my post claims X, and a commenter says, “No, that’s wrong, actually not-X because Y”, it would be a non-sequitur for me to reply, “I’d prefer you engage with what I wrote with more curiosity and kindness.” Curiosity and kindness are just not logically relevant to the claim! (If I think the commenter has misconstrued what I wrote, I could just say that.) It needs to be possible to discuss ideas without getting tone-policed to death. Once you start playing this game of litigating feelings and feelings about other people’s feelings, there’s no end to it. The only stable Schelling point that doesn’t immediately dissolve into endless total war is to have rules and for everyone to take responsibility for their own feelings within the rules.
I don’t think this is an unrealistic superhumanly high standard. As you’ve noticed, I am myself a pretty emotional person and tend to wear my heart on my sleeve. There are definitely times as recently as, um, yesterday, when I procrastinate checking this website because I’m scared that someone will have said something that will make me upset. In that sense, I think I do have some empathy for people who say that bad comments make them less likely to use the website. It’s just that, ultimately, I think that my sensitivity and vulnerability is my problem. Censoring voices that other people are interested in hearing would be making it everyone else’s problem.
An intellectual forum that is not being “held hostage” to people’s feelings will instead be overrun by hostile actors who either are in it just to hurt people’s feelings, or who want to win through hurting people’s feelings.
Some sensitivity is your problem. Some sensitivity is the “problem” of being human and not reacting like Spock. It is unreasonable to treat all sensitivity as being the problem of the sensitive person.
This made my blood go cold, despite thinking it would be good if Said left LessWrong.
My first thought when I read “judge on the standard of whether the outcome is good” is that this lets you cherrypick your favorite outcomes without justifying them. My second is that knowing whether something is good can be very complicated even after the fact, so predicting it ahead of time is challenging even if you are perfectly neutral.
I think it’s good LessWrong(’s admins) allows authors to moderate their own posts (and I’ve used that to ban Said from my own posts). I think it’s good LessWrong mostly doesn’t allow explicit insults (and wish this was applied more strongly). I think it’s good LessWrong evaluates commenting patterns, not just individual comments. But “nothing that makes authors feel bad about bans” is way too far.
It’s extremely common for all judicial systems to rely on outcome assessments instead of process assessments! In many domains this is obviously the right standard! It is very common to create environments where someone can sue for damages and not just have the judgement be dependent on negligence (and both thresholds are indeed commonly relevant for almost any civil case).
Like sure, it comes with various issues, but it seems obviously wrong to me to request that no part of the LessWrong moderation process relies on outcome assessments.
Okay. But I nonetheless believe it’s necessary that we have to judge communication sometimes by outcomes rather than by process.
Like, as a lower stakes examples, sometimes you try to teasingly make a joke at your friend’s expense, but they just find it mean, and you take responsibility for that and apologize. Just because you thought you were behaving right and communicating well doesn’t mean you were, and sometimes you accept feedback from others that says you misjudged a situation. I don’t have all the rules written down such that if you follow them your friend will read your comments as intended, sometimes I just have to check.
Similarly sometimes you try to criticize an author, but they take it as implying you’ll push back whenever they enforce boundaries on LessWrong, and then you apologize and clarify that you do respect them enforcing boundaries in general but stand by the local criticism. (Or you don’t and then site-mods step in.) I don’t have all the rules written down such that if you follow them the author will read your comments as intended, sometimes I just have to check.
Obviously mod powers can be abused, and having to determine on a case by case basis is a power that can be abused. Obviously it involves judgment calls. I did not disclaim this, I’m happy for anyone to point it out, perhaps nobody has mentioned it so far in this thread so it’s worth making sure the consideration is mentioned. And yeah, if you’re asking, I don’t endorse “nothing that makes authors feel bad about bans”, and there are definitely situations where I think it would be appropriate for us to reverse someone’s bans (e.g. if someone banned all of the top 20 authors in the LW review, I would probably think this is just not workable on LW and reverse that).
Sure, but “is my friend upset” is very different than “is the sum total of all the positive and negative effects of this, from first order until infinite order, positive”
I don’t really know what we’re talking about right now.
Said, you reacted to this:
with “Disagree”.
I have no idea how you could remotely know whether this is true, as I think you have never interacted with either Ben or Zack in person!
Also, it’s really extremely obviously true. Indeed, Zack frequently has the corresponding emotional and hostile outbursts, so it’s really extremely evident they are barely contained during a lot of it (since sometimes they do not end up contained, and then Zack apologizes for failing to contain them and explains that this is difficult for him).
Here’s what confuses me about this stance: do an author’s posts on Less Wrong (especially non-frontpage posts) constitute “the author’s private space”, or do they constitute “public space”?
If the former, then the idea that things that Alice writes about Bob on her shortform (or in non-frontpage posts) can constitute “bullying”, or are taking place “in front of” third parties (who aren’t making the deliberate choice to go to Alice’s private space), is nonsense.
If the latter, then the idea that authors should have the right to moderate discussions that are happening in a public space is clearly inappropriate.
I understood the LW mods’ position to be the former—that an author’s posts are their own private space, within the LW ecosystem (which is why it makes sense to let them set their own separate moderation policy there). But then I can’t make any sense of this notion of “bullying”, as applied to comments written on an author’s shortform (or non-frontpage posts).
It seems to me that these two ideas are incompatible.
No judicial system in the world has ever arrived at the ability to have “neutrally enforced rules”, at least the way I interpret you to mean this. Case law is the standard in almost every legal tradition, and the US legal system relies heavily on things like “jury of your peers” type stuff to make judgements.
Intent frequently matters in legal decisions. Cognitive state of mind matters for legal decisions. Judges go through years of training and are part of a long lineage of people who have built up various heuristics and principles about how to judge cases. Individual courts have their own culture and track record.
And that is for the US legal system, which is absolutely not capable of operating at anything close to the kind of standard that allows people to curate social spaces or deal with tricky kinds of social rulings. No company could make cultural or hiring or business decisions based on the standard of the US legal system. Neither could any internet forum.
There is absolutely no chance we will ever be able to codify LessWrong rules of conduct into a set of specific rules that can be neutrally judged by a third party. Zero chance. Give up. If that is something you need here, leave now. Feel free to try to build it for yourself.
It’s not just confusing sometimes, it’s confusing basically all the time. It’s confusing even for me, even though I’ve spent all these years on Less Wrong, and have been involved in all of these discussions, and have worked on GreaterWrong, and have spent time thinking about moderation policies, etc., etc. For someone who is even a bit less “very on LW”[1]—it’s basically incomprehensible.
I mean, consider: whenever I comment on anything anywhere, on this website, I have to not only keep in mind the rules of LW (which I don’t actually know, because I can’t remember in what obscure, not-linked-from-anywhere-easily-findable, long, hard-to-parse post those rules are contained), and the norms of LW (which I understand only very vaguely, because they remain somewhere between “poorly explained” and “totally unexplained”), but also, in addition to those things, I have to keep in mind whose post I am commenting under, and somehow figure out from that not only what their stated “moderation policy” is (scare quotes because usually it’s not really a specification of a policy, it’s just sort of a vague allusion at a broad class of approaches to moderation policy), but also what their actual preferences are, and how they enforce those things.
(I mean, take this recent post. The “moderation policy” a.k.a. “commenting guidelines” are: “Reign of Terror—I delete anything I judge to be counterproductive”. What is that? That’s not anything. What is Nate going to judge to be “counterproductive”? I have no idea. How will this “policy” be applied? I have no idea. Does anyone besides Nate himself know how he’s going to moderate the comments on his posts? Probably not. Does Nate himself even know? Well, maybe he does, I don’t know the guy; but a priori, there’s a good chance that he doesn’t know. The only way to proceed here is to just assume that he’s going to be reasonable… but it is incredibly demoralizing to invest effort into writing some comments, only for them to be summarily deleted, on the basis of arbitrary rules you weren’t told of beforehand, or “norms” that are totally up to arbitrary interpretation, etc. The result of an environment like that is that people will treat commenting here as strictly a low-effort activity. Why bother to put time and thought into your comments, if “whoops, someone’s opaque whim dictates that your comments are now gone” is a strong possibility?)
The whole thing sort of works most of the time because most people on LW don’t take this “set your own moderation policy” stuff too seriously, and basically (both when posting and when commenting) treat the site as if the rules were something like what you’d find on a lightly moderated “nerdy” mailing list or classic-style discussion forum.
But that just results in the same sorts of “selective enforcement” situations as you get in any real-world legal regime that criminalizes almost everything and enforces almost nothing.
By analogy with “very online”
Yes, of course. I both remember and agree wholeheartedly. (And @habryka’s reply in a sibling comment seems to me to be almost completely non-responsive to this point.)
I think there is something to this, though I think you should not model status in this context as purely one dimensional.
Like a culture of mutual dignity where you maintain some basic level of mutual respect about whether other people deserve to live, or deserve to suffer, seems achievable and my guess is strongly correlated with more reasonable criticism being made.
I think parsing this through the lens of status is reasonably fruitful, and within that lens, as I discussed in other sub threads, the problem is that many bad comments try to make some things low status that I am trying to cultivate on the site, while also trying to avoid accountability and clarity over whether those implications are actually meaningfully shared by the site and its administrators (and no, voting does not magically solve this problem).
The status lens doesn’t super shine light on the passive vs. active aggression distinction we discussed. And again, as I said, it’s too one-dimensional, in that people don’t view ideas on LessWrong as having a strict linear status hierarchy. Indeed, ideas have lots of gears, and criticism does not primarily consist of lowering something’s status; that framing seems like it gets rid of basically all the real things about criticism.
What are these things? Do you have a post about them?
I’m not sure what things you’re trying to cultivate in particular, but in general, I’m curious whether you’ve given any thought to the idea that the use of moderator power to shape culture is less robust to errors in judgement than trying to shape culture by means of just arguing for your views, for the reasons that Scott Alexander describes in “Guided by the Beauty of Our Weapons”. That is, in Alexander’s terminology, mod power is a “symmetric weapon” that works just as well whether the mods are right or wrong, whereas public arguments are an “asymmetric weapon” that’s more effective when the arguer is correct on the merits.
When I think rationalist culture is getting things wrong (whether that be an object-level belief, or which things are considered high or low status), I write posts arguing for my current views. While I do sometimes worry about whether my current views are mistaken, I don’t worry much about having a large negative impact if it turns out that my views are mistaken, because I think that the means by which I hope to alter the culture has some amount of built-in error correction: if my beliefs or status-assignment-preferences are erroneous in some way that’s not currently clear to me, others who can see the error will argue against my views in the comments, contributing to the result that the culture won’t accept my (ex hypothesi erroneous) proposed changes.
(In case this wasn’t already clear, this is not an argument against moderators ever doing anything. It’s a reason to be extra conservative about controversial and uncertain “culture-shaping” mod actions that would be very costly to get wrong, as contrasted to removing spam or uncontroversially low-value content.)
I have argued a lot for my views! My sense is they are broadly (though not universally) accepted among what I consider the relevant set of core stakeholders for LessWrong.
But beyond that, the core set of stakeholders is also pretty united behind the meta-view that in order for a place like LessWrong to work, you need the culture to be driven by someone with taste, who trusts their own judgements on matters of culture, and you should not expect that you will get consensus on most things.
My sense is there is broad buy-in that under-moderation is a much bigger issue than over-moderation. And also ‘convincing people in the comments’ doesn’t actually like… do anything. You would have to be able to convince every single person who is causing harm to the site, which of course is untenable and unrealistic. At some point, after you explained your reasons, you have to actually enforce the things that you argued for.
See of course the standard Well-Kept Gardens Die By Pacifism:
I have very extensively argued for my moderation principles, and also LessWrong has very extensively argued about the basic premise of Well-Kept Gardens Die By Pacifism. Of course, not everyone agrees, but both of these seem to me to create a pretty good asymmetric-weapons case for the things that I am de-facto doing as a head moderator.
The post also ends with a call for people to downvote more, which I also mostly agree with, but also it just seems quite clear that de-facto a voting system is not sufficient to avoid these dynamics.
Sorry, I don’t understand how this is consistent with the Public Archipelago doctrine, which I thought was motivated by different people wanting to have different kinds of discussions? I don’t think healthy cultures are driven by a dictator; I think cultures emerge from the interaction of their diverse members. We don’t all have to have exactly the same taste in order to share a website.
I maintain hope that your taste is compatible with me and my friends and collaborators continuing to be able to use the website under the same rules as everyone else, as we have been doing for fifteen years. I have dedicated much of my adult life to the project of human rationality. (I was at the first Overcoming Bias meetup in February 2008.) If Less Wrong were publicly understood as the single conversational locus for people interested in the project of rationality, but its culture weren’t compatible with me and my friends and collaborators doing the intellectual work we’ve spent our lives doing here, that would be a huge problem for my life’s work. I’ve made a lot of life decisions and investments of effort on the assumption that this is my well-kept garden, too; that I am not a “weed.” I trust you understand the seriousness of my position.
Well, it depends on what cultural problem you’re trying to solve, right? If the problem you’re worried about is “Authors have to deal with unwanted comments, and the existing site functionality of user-level bans isn’t quite solving that problem yet, either because people don’t know about the feature or are uncomfortable using it”, you could publicize the feature more and encourage people to use it.
That wouldn’t involve any changes to site policy; it would just be a matter of someone using speech to tell people about already-existing site functionality and thus to organically change the local culture.
It wouldn’t even need to be a moderator: I thought about unilaterally making my own “PSA: You Can Ban Users From Commenting on Your Posts” post, but decided against it, because the post I could honestly write in my own voice wouldn’t be optimal for addressing the problems that I think you perceive.
That is, speaking for myself in my own voice, I have been persuaded by Wei Dai’s arguments that user bans aren’t good because they censor criticism, which results in less accurate shared maps; I think people who use the feature (especially liberally) could be said to be making a rationality mistake. But crucially, that’s just my opinion, my own belief. I’m capable of sharing a website with other people who don’t believe the same things as me. I hope those people feel the same way about me.
My understanding is that you don’t think that popularizing existing site functionality solves the cultural problems you perceive, because you’re worried about users “heap[ing] [...] scorn and snark and social punishment” on e.g. their own shortform. I maintain hope that this class of concern can be addressed somehow, perhaps by appropriately chosen clear rules about what sorts of speech are allowed on the topics of particular user bans or the user ban feature itself.
I think clear rules are important in an Archipelago-type approach for defining how the different islands in the archipelago interact. Attitudes towards things like snark is one of the key dimensions along which I’d expect the islands in an archipelago to vary.
I fear you might find this frustrating, but I’m afraid I still don’t have a good grasp of your conceptualization of what constitutes social punishment. I get the impression that in many cases, what me and my friends and collaborators would consider “sharing one’s honest opinion when it happens to be contextually relevant (including negative opinions, including opinions about people)”, you would consider social punishment. To be clear, it’s not that I’m pretending to be so socially retarded that I literally don’t understand the concept that sharing negative opinions is often intended as a social attack. (I think for many extreme cases, the two of us would agree on characterizing some speech as unambiguously an attack.)
Rather, the concern is that a policy of forbidding speech that could be construed as social punishment would have a chilling effect on speech that is legitimate and necessary towards the site’s mission (particularly if it’s not clear to users how moderators are drawing the category boundary of “social punishment”). I think you can see why this is a serious concern: for example, it would be bad if you were required to pretend that people’s praise of the Trump administration’s AI Action plan was in good faith if you don’t actually think that (because bad faith accusations can be construed as social punishment).
I just want to preserve the status quo where me and my friends and collaborators can keep using the same website we’ve been using for fifteen years under the same terms as everyone else. I think the status quo is fine. You want to get back to work. (Your real work, not whatever this is.) I want to get back to work. I think we can choose to get back to work.
Please don’t strawman me. I said no such thing, or anything that implies such things. Of course not everyone needs to have exactly the same taste to share a website. What I said is that the site needs taste to be properly moderated, which of course does not imply everyone on it needs to share that exact taste. You occupy spaces moderated by people with different tastes from you and the other people within it all the time.
Yep, moderation sucks, competing access needs are real, and not everyone can share the same space, even within a broader archipelago (especially if one is determined to tear down that very archipelago). I do think you probably won’t get what you desire. I am genuinely sorry for this. I wish you good luck.[1]
Look, various commenters on LW including Said have caused much much stronger chilling effects than any moderation policy we have ever created, or will ever create. It is not hard to drive people out of a social space. You just have to be persistent and obnoxious and rules-lawyer every attempt at policing you. It really works with almost perfect reliability.
And of course, nobody at any point was arguing (and indeed I was careful to repeatedly clarify) that all speech that could be construed as social punishment is to be forbidden. Many people will try to socially punish other people. The thing that one needs to rein in, to create any kind of functional culture, is social punishment of the virtues and values that are good and should be supported and are the lifeblood of the site by my lights.
The absence of moderation does not create some special magical place in which speech can flow freely and truth can be seen clearly. You are welcome to go and share your opinions on 4chan or Facebook or Twitter or any other unmoderated place on the internet if you think that is how this works. You could even start posting on DataSecretsLox if you are looking for something with demographics more similar to this place, and a moderation philosophy more akin to your own. The internet is full of places with no censorship, with nothing that should stand in the way of the truth by your lights, and you are free to contribute there.
My models of online platforms say that if you want a place with good discussion, the first priority is to optimize its signal-to-noise ratio, and make it be a place that sets the right social incentives. It is not anywhere close to the top priority to worry about every perspective you might be excluding when you are moderating. You are always excluding 99% of all positions. The question is whether you are making any kind of functional discussion space happen at all. The key to doing that is not absence of moderation; it’s the presence of functional norms that produce a functional culture, which requires both leading by example and selection and pruning.
I also more broadly have little interest in continuing this thread, so don’t expect further comments from me. Good luck. I expect I’ll write more some other time.
Like, as in, I will probably ban Said.
Well, I agree with all of that except the last three words. Except that it seems to me that the thing that you’d need to rein in is the social (and administrative) punishment that you are doing, not anything else.
I’ve been reviewing older discussions lately. I’ve come to the conclusion that the most disruptive effects by far, among all discussions that I’ve been involved with, were created directly and exclusively by the LW moderators, and that if the mods had simply done absolutely nothing at all, most of those disruptions just wouldn’t have happened.
I mean, take this discussion. I asked a simple question about the post. The author of the post (himself an LW mod!), when he got around to answering the question, had absolutely no trouble giving a perfectly coherent and reasonable answer. Neither did he show any signs of perceiving the question to be problematic in any way. And the testimony of multiple other commenters (including from longtime members who had contributed many useful comments over the years) affirmed that my question made sense and was highly relevant to the core point of the post.
The only reason—the only reason!—why a simple question ended up leading to a three-digit-comment-count “meta” discussion about “moderation norms” and so on, was because you started that discussion. You, personally. If you had just done literally nothing at all, it would have been completely fine. A simple question would’ve been asked and then answered. Some productive follow-up discussion would’ve taken place. And that’s all.
Many such cases.
It’s a good thing, then, that nobody in this discussion has called for the “absence of moderation”…
I certainly agree with this.
Thanks Said. As you know, I have little interest in this discussion with you, as we have litigated it many times.
Please don’t respond further to my comments. I am still thinking about this, but I will likely issue you a proper ban in the next few days. You will probably have an opportunity to say some final words if you desire.
Look, this just feels like a kind of crazy catch-22. I weak-downvoted a comment, and answered a question you asked about why someone would downvote your comment. I was not responsible for anything but a small fraction of the relevant votes, nor do I consider any blame to have fallen upon me when honestly explaining my case for a weak-downvote. I did not start anything. You asked a question, I answered it, trying to be helpful in understanding where the votes came from.
It really is extremely predictable that if you ask a question about why a thing was downvoted, that you will get a meta conversation about what is appropriate on the site and what is not.
But again, please, let this rest. Find some other place to be. I am very likely the only moderator for this site that you are going to get, and as you seem to think my moderation is cause for much of your bad experiences, there is little hope in that changing for you. You are not going to change my mind in the 701st hour of comment thread engagement, if you didn’t succeed in the first 700.
Alright—apologies for the long delay, but this response meant I had to reread the Scaling Hypothesis post, and I had some motivation/willpower issues in the last week. But I reread it now.
I agree that the post is deliberately offensive at parts. E.g.:
or (emphasis added)
and probably the most offensive is the ending (won’t quote it so as not to clutter the reply, but it’s in Critiquing the Critics, especially from “What should we think about the experts?” onward). You’re essentially accusing all the skeptics of falling victim to a bundle of biases/signaling incentives, rather than disagreeing with you for rational reasons. So you were right, this is deliberately offensive.
But I think the answer to the question—well actually let’s clarify what we’re debating, that might avoid miscommunication. You said this in your initial reply:
So in a nutshell, I think we’re debating something like “will what I advocate mean you’ll be less effective as a writer” or more narrowly “will what I’m advocating for mean you couldn’t have written really valuable past pieces like the Scaling Hypothesis”. To me it still seems like the answer to both is a clear no.
The main thing is, you’re treating my position as if it’s just “always be nice”, which isn’t correct. I’m very utilitarian (about commenting and in general) (one of my main insights from the conversation with Zack is that this is a genuine difference). I’ve argued repeatedly that Said’s comment is ineffective, basically because of what Scott said in How Not to Lose an Argument. It was obviously ineffective at persuading Gordon. Now Said argued that persuading the author isn’t the point, which I can sort of grant, but I think it will be similarly ineffective for anyone sympathetic to religion for the same reasons. So it’s not that I terminally value being nice,[1] it’s that being nice is generally instrumentally useful, and would have been useful in Said’s case. But that doesn’t mean it’s necessarily always useful.
I want to call attention to my rephrasing of Said’s post. I still claim that this post would have been much more effective in criticizing Gordon’s post. Gordon would have reacted in a more constructive way, and again, I think everyone else who sympathizes with religion is essentially in the same position. This seems to me like a really important point.
So to clarify, I would not have objected to the Scaling Hypothesis post despite some rudeness. The rudeness has a purpose (the bolded sentence is the one that I remembered most from reading it all the way back, which is evidence for your claim that “those were some of the most effective parts”). And the context is also importantly different; you’re not directly replying to a skeptic; the post was likely to be read by lots of people who are undecided. And the fact that it was a super high effort post also matters because ‘how much effort does the other person put into this conversation’ is always one of the important parameters for vibes.
I also wanna point out that your response was contradictory in an important way. (This isn’t meant as a gotcha; I think it captures the difference between “always be nice” and “maximize vibes for impact under the constraint of being honest and not misleading”.) Because you said that you wouldn’t have been successful if you worried about vibes, but also that you made the Scaling Hypothesis post deliberately offensive, which means you did care about vibes, you just didn’t optimize them to be nice in this case.
Idk if this is worth adding, but two days ago I remembered something you wrote that I had mentally tagged as “very rude”, and where following my principles would mean you’re “not allowed” to write that. (So if you think that was important to write in this way, then we have a genuine disagreement.) That was your response to now-anonymous on your Clippy post, here. Here, my take (though I didn’t reread, this is mostly from memory) is something like
the critique didn’t make a lot of sense because it boiled down to “you’re asserting that people would do xyz, but xyz is stupid”, which is a non sequitur (“people do xyz” and “xyz is stupid” can both be true)
your response was needlessly aggressive and you “lost” the argument in the sense that you failed to persuade the person who complained
it was absolutely possible to write a better reply here; you could have just made the above point (i.e., “it being stupid doesn’t mean it’s unrealistic”) in a friendly tone, and the result would probably have been that the commenter realizes their mistake; the same is achieved with fewer words and it arguably makes you look better. I don’t see the downside.
Strictly speaking I do terminally value being nice a little bit because I terminally value people feeling good/bad, but I think the ‘improve everyone’s models about the world’ consideration dominates the calculation.
There’s no way this is true.
Not really, no. As you say, you’ve made your position clear. I’m not sure what I could say to convince you otherwise, and that’s not really my goal, anyhow. As far as I’m concerned, what I’m saying is extremely obvious. For example, you write:
And this is obviously, empirically false. The most intellectually productive environments/organizations in the history of the world have been those where you can say stuff like the example comment without concern for censure, and where it’s assumed that nobody will be bothered by it. (Again, see the Philip Greenspun MIT anecdote I cited for one example; but there are many others.)
I think that you are typical-minding very strongly. It seems as if you’re not capable of imagining that someone can fail to perceive the sort of thing we’re discussing as being some sort of social attack. This is causing you to both totally misunderstand my own perspective, and to have a mistaken belief about how “almost everyone on LessWrong” thinks. (I don’t know if you just haven’t spent much time around people of a certain mental make-up, or what.)
I appreciate it! I think this is actually an excellent example of how “vibe protection” is bad, because it prevents us from discussing this sort of thing—which is obviously bad, because it’s central to the disagreement!
I think I’m capable of imagining that someone can fail to perceive this sort of thing. I know this because I did imagine this—when you told me you don’t care, and every comment I had read from you was in the same style, I (perhaps naively) just assumed that you’re telling the truth.
But then you wrote this reply to me, which was significantly friendlier than any other post you’ve written to me. This came directly after I said this
And then also your latest comment (the one I’m replying to) is the least friendly, except for the final paragraph, which is friendly again. So, when I did something unusually nice,[1] you were being nice in response. When I was the most rude, in my previous comment, you were the most rude back. Your other comments in this thread that stand out as more nice are those in response to Ben Pace rather than habryka.
… so in summary, you’re obviously just navigating social vibes like a normal person. I was willing to take your word for it that you’re immune, but not if you’re demonstrating otherwise! (A fun heuristic is just to look at {number of !}/{post length}. There are exceptions, but most of the time, !s soften the vibe.)
clarifying that this was not an intended trap; I just genuinely don’t get why the particular comment asking me to define vibes should get downvoted. (Although I did deliberately not explain why I said I don’t believe you; I wanted to see if you’d ask or just jump to a conclusion.)
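(A minimal sketch of that half-joking heuristic, assuming “post length” just means character count; the function name is hypothetical and purely illustrative:)

    def exclamation_density(text: str) -> float:
        """Rough vibe-softener score: exclamation marks per character of post."""
        if not text:
            return 0.0
        return text.count("!") / len(text)

    # e.g. exclamation_density("Thanks, that helps a lot!") > exclamation_density("No.")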
Frankly, I think that you’re mistaking noise for signal here. There’s no “niceness” or “rudeness” going on in these comments, there are just various straightforwardly appropriate responses to various statements / claims / comments / etc.
This is related to what I meant when I wrote:
There’s just no need for this sort of “higher simulacrum level” stuff. Is my comment “nice”? Is it “rude”? No, it’s just saying what I think is true and relevant. If you stop trying to detect “niceness” and “rudeness” in my comments, it’ll be simpler for everyone involved. That’s the benefit of abjuring “vibes”: we can get down to the important stuff.
… on the other hand, maybe everything I just said in the above paragraph is totally wrong, and you should instead try much harder to detect “vibes”:
Do you mean this literally? Because that’s intensely ironic, if so! You see, it’s extremely obvious to me why that comment got downvoted. If I get it, and you don’t, then… what does that say about our respective ability to understand “vibes”, to “navigate social situations”, and generally to understand what’s going on in discussions like this? (No, really—what does it say about those things? That’s not a rhetorical question, and I absolutely cannot predict what your response is going to be.)
I didn’t say I don’t get why it happened; I said, I don’t get why it should happen, meaning I don’t see a reason I agree with, I think the comment is fine. (And if it matters, I never thought about what I think would have happened or why with this comment, so I neither made a true nor a false prediction.)
I see… well, fair enough, I guess. (I find the original wording confusing, FYI, but your explanation does clear things up.)
Separate response because this doesn’t matter for the moderation question (my argument here applies to personal style only) and also because I suspect this will be a much more unpopular take than the other one, so people may disagree-vote in a more targeted way.
The question of whether you should optimize personal writing for persuasion-via-vibes is one I’ve thought about a lot, and I think the correct answer is “yes”. Here are four reasons why.
One, you can adhere to a very high epistemic standard while doing this. You can still only argue for something if you believe it to be true and know why you believe it, and always give the actual reasons for why you believe it. (The State Science Institute article from your post responding to Eliezer’s meta-honesty notably fails this standard.) I’m phrasing this in a careful/weird way because I guess you are in some sense including ‘components’ in your writing that will be persuasive for reasons-that-are-not-the-reasons-why-you-believe-the-thing-you’re-arguing-for, so you’re not only giving those reasons, but you can still always include those reasons. I mean, the truth is that when I write, I don’t spend much time explicitly checking whether I obey any specific rules; I just think I have a good intuitive sense of how epistemically pure I’m being. When I said in my comment 4 days ago that optimizing vibes doesn’t require you to lie “at all”, this feeling was the thing upstream of that phrasing. Like, I can write a post such that I have a good feeling about both the post’s vibes and its epistemic purity.
In practice, I just suspect that the result won’t look like anything you’d actually take issue with. E.g., my timelines post was like this. (And fwiw no one has ever accused me of being manipulative in a high-effort post, iirc.)
Two, I don’t think there is a bright line between persuasive vibes and not having anti-persuasive vibes. Say you start off having a writing style that’s actively off-putting and hence anti-persuasive. I think you’re “allowed” to clean that up? But then when do you have to stop?
Three, it’s not practically feasible to not optimize vibes. It is feasible to not deliberately optimize vibes, but if you care about your writing, you’re going to improve it, and that will make the vibes better. Scott Alexander is obviously persuasive in part because he’s a good writer. (I think that’s obvious, anyway.) I think your writing specifically actually has a very distinct vibe, and I think that significantly affects your persuasiveness, and you could certainly do a lot worse as far as the net effect goes, so… yeah, I think it is in fact true to say that you have optimized your vibes to be more persuasive, just not intentionally.
And four, well, if there’s a correlation between having good ideas and having self-imposed norms on how to communicate, which I think there is, then refusing to optimize vibes is shooting yourself/your own team/the propagation of good ideas in the foot. You could easily come up with a toy model where there are two teams, one optimizes vibes and one doesn’t, and the one who does gradually wins out.
I think right now the situation is basically that ~no one has a good model of how vibes work so people just develop their own vibes and some of them happen to be good for persuasion and some don’t. I’d probably estimate the net effect of this much higher than most people; as I indicated in my comment 4 days ago, I think the idea that most people on LW are not influenced by vibes is just not true at all. (Though it is higher outside LW, which, I mean, that also matters.) Which is kind of a shitty situation.
Like I said, I think this doesn’t have a bearing on the moderation question, but I do think it’s actually a really important point that many people will have to grapple with at some point. Ironically I think the idea of optimizing vibes for persuasion has very ugly vibes (like a yuck factor to it), which I definitely get.
I upvoted this comment but strongly disagree-voted. (This is unusual enough that I mention it.) The following are some scattered notes, not to be taken as a comprehensive reply.
Firstly, I think that your thinking about this subject could stand to be informed a lot more by the “selective” vs. “corrective” vs. “structural” trichotomy.[1] In particular, you essentially ignore selective approaches; but I think that they are of critical importance, and render a large swath of what you say here largely moot.
Second… I must’ve linked to this comment thread by Vladimir_M several dozen times by now, but I think it still hasn’t really “reached conceptual fixation”, so I’m linking it again. I highly recommend reading it in detail (Vladimir_M was one of LW’s most capable and insightful commenters, in the entirety of the site’s history), but the gist is that while a person could claim to be experiencing some negative emotional effect of some other person’s words or actions, could even actually, genuinely be experiencing that negative emotional effect, nevertheless the actual cause of that emotional effect is an unconscious strategic calculation that is based entirely on status dynamics. Change the status dynamics, and—like magic!—the experienced emotional effects will change, or even vanish entirely. This means that taking the emotional effects (which, I repeat, may be entirely “real”, in the sense that they are not consciously falsified) as “brute facts” is a huge mistake, both descriptively and strategically: it simply gets the causation totally wrong, and creates hideously bad incentives.
And that, in turn, means that all of the reasons you give for “coddling”, for attending to “vibes”, etc., are based on a radically mistaken model of interpersonal dynamics; and that rather than improving anything, doing what you suggest is precisely the worst thing that we could be doing. To the extent that we’re doing it already, it’s the source of most of our problems; to the extent that we could be doing it even more, it’s going to cause even worse problems than we already see.
This, for example, seems like a clear case of perception of status dynamics.
Already discussed many times. (I especially note this comment.) What you say here is basically just entirely wrong, in every particular:
It’s not possible to “articulate identical points in a different style”.
If it were possible and if I did it, it would have exactly the same effect.
The trade-off is huge for both the commenter and (even more importantly!) for readers.
Writing more words to express the same idea is bad.
Again, this has all been discussed ad nauseam, and all of the points you cite have been quite thoroughly rebutted, over and over and over. (I don’t mean this as a rebuff to you—there’s no reason you should be expected to have followed these discussions or even to know about them. I am only saying that none of these points are new, and there is absolutely nothing in what you say here that I—and, I expect, Zack as well—haven’t already considered at length.)
And to summarize my response: not only is “caring about vibes” instrumentally very bad, but also, the idea that “caring about vibes” makes people feel better, while “not caring about vibes” makes people feel worse, is just mistaken.
The important things in interacting on a public forum for intellectual discussion are honesty, integrity, and respect for one’s interlocutor as someone who is assumed to be capable of taking responsibility for their own mind and their own behavior. (In other words, a person-interface approach.)
(As usual, none of this is to be taken as an endorsement of vulgarity, insults, name-calling, etc.; the normal standards of basic decency toward other people, as seen in ordinary intellectual society, still apply. The MIT professor from Philip Greenspun’s story probably wasn’t going around calling his students idiots or assholes, and we shouldn’t do such things either.)
I apologize for the self-serving nature of that objection; but then, I did write that post because I find this conceptual distinction to be very often useful, and also very neglected.
(I had not encountered any of the resources you linked, but mostly (I skipped e.g. some child threads in the Vladimir_M thread) read them now, before replying.)
To make sure I understand. Are you saying, “my style of commenting will cause some users to leave the site, and those will primarily be users that are a net negative for the site, so that’s a good thing?”
Assuming that is the argument, I don’t agree that this is an important factor in your favor. Insofar as the unusual property about your commenting style is vibes, it does a worse job at the selection than a nice comment with identical content would do.
(If you’re just arguing about the net impact of your comments vs. the counterfactual of you not writing them at all—rather than whether they could be written differently—then I still disagree, because I think the ‘driving people away’ effect will be primarily vibe-based in your case, and probably net harmful.)
I read the comment thread before your summary, and this is definitely not what I would have said the gist of the comment thread was. I’d have said the main point was that, if you have a culture that terminally values psychological harm minimization, this allows for game-theoretical exploits where people either pretend to be hurt or modify themselves to be actually hurt.
Response to your summary: I haven’t asserted any causation. Even if your description is true, it’s unclear how this contradicts my position. (Is it true? Most complicated question we’ve touched so far, imo, big rabbit hole, probably not worth going into. But my model agrees that status dynamics play a gigantic role.)
Response to what I thought the gist was: I agree that exploitation is a big problem. I disagree that this is enough of a reason not to optimize for vibes. I think in practice it’s less of a problem than Vladimir makes it sound, for the particular interventions I suggest (like optimizing vibes for your commenting style and considering it as a factor for moderation decisions), because (a) some people are quite good at seeing whether someone is sincere and are hard to trick, and I think this ability is crucial to being a good mod, and (b) I don’t think it sets particularly bad incentives for self-modification, because you don’t actually get a lot of power from having your feelings hurt under the culture I’m advocating for.
But, even if it were a bigger problem—even a much bigger problem—I would still not consider it a fatal rebuttal. I view this sort of like saying that having a karma system is bad because it can be exploited. In fact it is exploited all the time, but it’s still a net positive. You don’t just give up on modeling one of the most important factors of how brains work because your system of doing so will be exploited. You optimize anyway and then try to intelligently deal with exploitation as best as you can.
The people in the comment threads you linked didn’t seem to be convinced, so I think a more accurate summary is, “I’ve discussed this several times before, and I think I’m right.”
If you think that this is not worth discussing again and therefore it’s not worth continuing this particular conversation, then I’m fine with that, I don’t think you have any obligation to respond to this part of the comment, or the entire comment. (I wanna point out that I wrote my initial comment to Zack, not to you—though I understand that I mentioned you, which I thought was kind of unavoidable, but I concede that it can be viewed as starting a conversation with you.)
You can probably guess this, but I’m not convinced by your arguments, and I think the first two bullet points are completely false, and the third is mostly false. (I agree with the last one, but changing vibes doesn’t make comments that much longer; my initial comment here was long for specific reasons that don’t generalize.) I used to have a commenting style much closer to yours, and now I don’t, so I know you can in fact dramatically change vibes without changing content or length all that much. It’s difficult to convince me that X isn’t possible when I’ve done X.
(When you say “I have no idea why your proposed alternative version of my comment would be ‘less social-attack-y’” then I believe you, but so what? (I can see immediately why the alternative version is less social-attack-y.) If the argument were “what you’re advocating for is unfair toward people who aren’t as good at understanding vibes”, then I’d take this very seriously, but I won’t reply to that until you’re actually making that argument.)
No.
I am saying that if we have a forum run with the attitude and approach that I recommend, then the people who are suited to a forum like that will be attracted to it, and the people who are not suited to it will mostly stay away. This is a much more effective way of building a desirable forum culture than trying to have existing members alter their behavior to “optimize for vibes”.
(Of course this works in reverse, too. The current administration of LW have built the currently active forum culture not by getting people to change their behavior, but by driving away people who find their current approach to be bad, and attracting people who find their current approach to be good.)
This is a moot point given that the assumption doesn’t hold, but I just want to note that there is no such thing as “a nice comment with identical content” (as some purportedly “not-nice” comment). If you say something differently, then you’ve said something different. Presentation cannot be separated from content.
Yeah, you’ve definitely missed the point.
As you say, this is rather a large rabbit hole, but I’ll just note a couple of things:
This is a total, fundamental misunderstanding of the claim. The people who are experiencing the negative emotions in the sorts of cases that Vladimir_M is talking about are sincere! They sincerely, genuinely, un-feignedly feel bad!
It’s just that if the incentives and the status dynamics were different, those people would feel differently.
There is usually nothing conscious about it, and no “tricking” involved.
You get all the power from that, under the culture you’re advocating for. The purported facts about who gets their feelings hurt by what are the motivating principle of the culture you’re advocating for! By your own description, this is a culture of “optimizing for vibes”!
See above. Total misunderstanding of the causation. Your model simply gets things backwards.
Sure they weren’t convinced. What, did you expect replies along the lines of “yeah you’re totally right, after reading what you just wrote there, I hereby totally reverse my view on the matter”? As I’ve written before, that would be a bad idea! It is proper that no such replies were forthcoming, even conditional on my arguments having been completely correct.
But my interlocutors in those discussions also didn’t provide anything remotely resembling coherent or credible counter-arguments, weighty contrary evidence, etc.
(In any case, why rely on others? Suppose they had been convinced—so what? I claim that the points you cite have been thoroughly rebutted. If I am wrong about that, and a hundred people agree with me, then I am still wrong. I didn’t link those comment threads because I thought that everyone agreed with me; I linked them because I consider my arguments there to have been correct. If you disagree, fine and well; but it’s your opinion, and mine, that matters here, not some other people’s.)
Well, having traded high-level overviews, nothing remains for us at this point but to examine specific examples. If you have such, I’m interested to see them. (That’s as far as the first bullet point goes, i.e. “it’s not possible to ‘articulate identical points in a different style’”.)
As to the second bullet point (“if it were possible and if I did it, it would have exactly the same effect”), I am quite certain about this because I’ve experienced it many times.
Here’s the thing: when someone (who has some stake in the situation) tells you that “it’s not what you said, it’s how you said it”, that is, with almost no exceptions ever, a deliberate attempt to get you to not say that thing at all, in any way. It is a deliberate attempt to impose costs on your ability to say that thing—and if you change the “how”, then they will simply find another thing to criticize in “how”, all the while denying that the problem is with the “what”.
(See this recent discussion for a perfect example. I say critical things directly—I get moderated for it. I don’t say such things directly, I get told that I’m being “passive-aggressive”, that what I wrote is “the same thing even though you successfully avoided saying the literal words”, that it’s “obvious” that I meant the same thing, we have a moderator outright admitting that he reads negative connotations into my comments, etc., etc. We even see a moderator claiming, absurdly, that it would be better if I were to outright call people stupid and evil! How’s that for “vibes optimization”, eh? And what’s the likelihood that “you are stupid and evil” would actually not draw moderator action?)
I’ve seen this play out many, many, many times, and not only with myself as the target. As I’ve mentioned, I do now have some experience running my own discussion forum, with many users, various moderators, various moderation approaches, etc. I have seen this happening to other people, quite often.
When someone whose interests are opposed to yours tells you that “it’s not what you said, it’s how you said it”, the appropriate assumption to make is that they’re lying. The only real question is whether they’re also lying to themselves, or only to you. (Both variants happen often enough that one should not have strong priors either way.)
I’m afraid that you are responding to a strawman of my point.
You quote the first sentence of the linked comment, but of course it was only the first sentence; in the rest of that comment, I go on to say that I do not, in fact, think that the proposed alternative version of my comment would be “less social-attack-y”, and furthermore that I think that neither version of my comment is, or would be, “social-attack-y” at all; but that nevertheless, either version would be equally perceived as being a social attack, by those who expect to benefit from so perceiving it. As I said then:
So this is not a matter of me “not understanding vibes”. It is a matter of you being mistaken about the role that “vibes” play in situations like this.
Note that the person that I’m talking to, in that comment thread—the one who gave the proposed alternate formulation of my comment—then writes, in response to (and in partial agreement with) my above-quoted comment:
(This is also what it looks like when a person perceives status dynamics without recognizing this fact.)
I read everything you wrote; I think it’s very unlikely that continuing this would be fruitful, so I won’t.
This doesn’t seem compatible with reality as I understand it. I am not familiar with any example of the latter, and I have seen dozens of instances of the former. I’d appreciate examples[1] illustrating why I’m wrong.
I recognize the irony here
Have you met a user called “aranjaegers” in LessWrong-adjacent Discord servers? (LessWrong name: @Bernd Clemens Huber.) He is infamously banned from 50+ rationalist-adjacent servers—for being rude, for spamming walls of text of his arguments (which he improved on eventually), for being too pompous about his areas of interest, etc. I think his content and focus area are mostly fine; he can be rude here and there, and there are the walls of text—which he restricts to other channels if asked. He’s barely a crackpot—plausibly deniably not a crackpot—operating from an inside view and a bit straightforward in calling out what he thinks are stupid or clownish things (although I personally think he’s rationalising). After other servers banned him, the main unofficial lw-cord—maintained with extremely light moderation by a single volunteer, who thought Aran Jaeger was good at scaring away certain types of people—got captured by him, and the Discord got infamous for being a containment chamber for this person. Eventually, after a year, the moderator muted him for a day because he was being rude to @Kabir Kumar, so he left. (I tracked this situation for multiple months.)
Can confirm, was making the server worse—banned him myself, for spam.
Thanks for the example. It’s honestly entertaining and at times hilarious to go through his comment history. It does seem to qualify as spam, though.
That was from before; I convinced him to condense his entire wall of text into 4 premises[1]—I used the analogy of it being a test for finding interested people, so that he can expand later with his walls of text—but that took around 3 hours of back and forth in lw-cord, because otherwise it goes in circles. Besides, I find him funny too. He still managed to get banned from multiple servers afterwards, so I think it’s just his personality and social skills. It’s possible to nudge him in certain directions, but it takes a lot of effort; his bottom line is kind of set on his cause.
I would summarise it as “evolutionary s-risk due to exponentially increasing contamination by panspermia caused by space exploration”. (He thinks the current organisations monitoring this are dysfunctional.)
Other trivia: I told him to go attend an EA meetup in Munich. He was convinced he would make an impact, but was disappointed that only a few people attended, although his impression was mostly positive. (If you know about more meetups or events in Munich regarding this particular cause, let me know and I will forward it to him.)
On the lw-cord thing: Kabir Kumar posted an advert for an event he was hosting, with some slogan calling for whoever is qualified. Aran basically went on to say (paraphrasing) “but I am the most important person and he banned me from his server, so he’s a liar”; the lw-cord mod got mildly annoyed at this rude behavior and muted him.
But he didn’t actually leave because he got muted—he has been muted several times across hundreds of servers. The reason he cited was that some other user on the Discord was obnoxious to him from time to time; this same user was called a “clown” by Aran when they had an ethical disagreement, and renamed his server alias to “The Clown King” to mock Aran. He also had a change of heart about that approach, given that not many people on Discord took his cause as seriously as he does. Nowadays he’s in his moral ambition phase; he even enrolled in a Mars innovation competition for children and got graded 33⁄45, because his project didn’t innovate anything—he just posted about his ethical cause.
He has been one-shotted by the inside view enough times that he thinks he has access to infohazards which have the potential to stop Elon Musk from launching things into space. For example, his latest public one is that under the UN weapons-of-mass-destruction treaty all interplanetary space travel is prohibited and people should be prosecuted for it.[2]
He has a master’s in maths; his now-deleted Reddit account is u/eterniseddragon; he has sent emails regarding his cause to 100k people and organisations (easily searchable on lw-cord)[3]; he even has a text file with all the email addresses, etc.
I think those are the main highlights anyways.
I touched grass with long-term commitment a month ago and left Discord, Twitter, and Reddit in general, except for DMs and real-life work, so I cannot link this here. But you may recognise me, if you have been on a few of those Discords—namely bayesianconspiracy and the lesswrong discord—by my multiple usernames and accounts: dogmaticrationalist, militantrationalist, curiousinquirer, averagepcuser, averagediscorduser, RatAnon.
I even trolled a bunch of Discord servers by creating a huge list of links in a Discord thread on lw-cord, so that Aran would find it easier to find servers (although I wasn’t that explicit about my non-altruistic motives), but it was funny to watch him go and get banned. The Optimised Dating server banned him very quickly, from what I have heard. In hindsight, I apologize for any inconvenience.
I see, thanks.
Am I to take this as a statement of a moderation decision, or merely your personal opinion?
If the former—then, of course, I hear and obey. (However, for the remainder of this comment I’ll assume that it’s the latter.)
No, I don’t think that I’m strawmanning anything. You keep saying this, and then your supposed corrections just restate what I’ve said, except with different valence. For instance:
This seems to be just another way to describe what I wrote in the grandparent, except that your description has the connotation of something fine and reasonable and unproblematic, whereas mine obviously does not.
Of course people have such preferences! Indeed, it’s not shocking at all! People prefer not to have their bad ideas challenged, they prefer not to have obvious gaps in their reasoning pointed out, they prefer that people treat all of their utterances as deserving of nothing less than “curious”, “kind”, “collaborative” replies (rather than pointed questions, direct and un-veiled criticism, and a general “trial by fire”, “explore it by trying to break it” approach)?! Well… yeah. Duh. Humans are human. No one is shocked.
(And people will, if asked, couch these preferences in claims about “bad discourse in plausibly deniable ways”, etc.? Again: duh.)
And I must point out that for all your complaints about strawmanning, you don’t seem to hesitate in doing that very thing to me. In your reply, you write as if I hadn’t included the parenthetical, where I clarify that of course I can understand the mindset in question, if I allow certain unflattering hypotheses into the space of possibilities. You might perhaps imagine reasons why I would be initially reluctant to do this. But that’s only initially. To put it another way, I have a prior against such hypotheses, but it’s not an insuperable one.
So, yes, I understand just fine; I am quite capable of “modeling the preferences” of such people as you mention. No doubt you will reply: “no, actually you don’t, and you aren’t”. But let’s flesh out this argument, proactively. Here’s how it would go, as far as I can tell:
“You are ascribing, to individuals who are clearly honest people of high integrity and strength of character, preferences and motivations which are indicative of the opposite of those traits. Therefore, your characterization cannot be accurate.”
“One man’s modus ponens is another man’s modus tollens. The behavior of the people in question points unambiguously at their possessing the ascribed preferences and motivations (which are hardly improbable a priori, and must be actively fought against even by the best of us, not simply assumed not to be operative). Therefore, perhaps they are not quite so honest, their integrity not so high, and their strength of character not so great.”
I don’t know what exactly you’d then say in response to this—presumably you won’t be convinced, especially since you included yourself in the given set of people. And, to be clear, I don’t think that disagreeing with this argument is evidence of anything; I am certainly not saying anything like “aha, and if you reject this argument that says that you are bad, then that just proves that you are bad!”.
I outline this reasoning only to provide a countervailing model, in response to your own argument that I am simply clueless, that I have some sort of inability to understand why people do things and what they want, etc. No, I certainly do have a model of what’s going on here, and it predicts precisely what we in fact observe. You can argue that my model is wrong and yours is right, but that’s what you’ll have to argue—“you lack a model that describes and predicts reality” is not an argument that’s available to you in this case.
One of these days, I will probably need to write an essay, which will be titled “‘Well-Kept Gardens Die By Pacifism’ Considered Harmful”. That day will not be today, but here’s a small down payment on that future essay:
When I read that essay, I found it pretty convincing. I didn’t see the problems, the mistakes—because I’d never been a moderator myself, and I’d never run a website.
That has changed now. And now that I have had my own experience of running an online forum (for five years now)—having decisions to make about moderation, having to deal with spam and egregious trolls and subtle trolls and just bad posters and crazy people and all sorts of things—now that I’ve actually had to face, and solve, the problems that Eliezer describes…
… now I can see how dreadfully, terribly wrong that essay is.
(Maybe the problem is that “Well-Kept Gardens Die By Pacifism” was written before Eliezer really started thinking about incentives? Maybe his own frustration with low-quality commenters on Overcoming Bias led him to drastically over-correct when trying to establish principles and norms for the then-new Less Wrong? Maybe he forgot to apply his own advice to his thinking about forum moderation? I don’t know, and a longer and deeper exploration of these musings will have to wait until I write the full version of this post.)
Still, I don’t want to give the impression that the essay is completely wrong. Eliezer writes:
Downvoting. He’s talking about downvoting. Not banning! Recommending bans is a mistake this essay could have made, but didn’t. (Perhaps because Eliezer hadn’t thought of it yet? But I do generally default to thinking well of people whose writings I esteem highly, so that is not my first hypothesis.)
And while the karma system has its own problems (of which I have spoken, a few times), nevertheless it’s a heck of a lot better than letting authors ban whoever they want from their posts.
The fact that it’s nevertheless (apparently) not enough—that the combination of downvotes for the bad-but-not-overtly-bannable, and bans for the overtly bannable, is not enough for some authors—this is not some immutable fact of life. It simply speaks poorly of those authors.
Anyhow:
Of course they do. That’s exactly how they tend to die. It’s precisely the obviously norm-violating content that is the problem, because if you accept that, then your members learn that your moderators either have an egregious inability to tell the good stuff from the bad, or that your moderators simply don’t care. That is deadly. That is when people simply stop trying to be good themselves—and your garden dies.
And there’s also another way in which well-kept gardens tend to die: when the moderators work to prevent the members from maintaining the garden; when “grass-roots” maintenance efforts—done, most commonly, simply with words—are punished, while the offenders are not punished. That is when those members who contribute the most to the garden’s sanctity—those who put in effort to rebut bad arguments, for instance, or otherwise to enforce norms and practices of good discussion and good thinking—will become disgusted with what they perceive as the moderators betraying the garden to its enemies.
This seems to me to be the crux of the issue.
There’s a thing that happens in sports and related disciplines wherein the club separates into two sections: there’s a competition team, and there’s everybody else trying to do the sport and have a good time. There are very sharp differences in mindset between the two.
In the competition team every little weakness or mistake is brutally hammered out of you, and the people on the team like this. It’s making them stronger and better, they signed up for it. But if a beginner tried to join them, the beginner would just get crushed. They wouldn’t get better, and they would probably leave and say their competitive-minded teammates are being jerks.
Without any beginners though, there is no competition team. The competitors all used to be beginners, and would have gotten crushed in the hyperbaric training chamber of their current team culture.
I think you are trying to push for a competition team, and Habryka is not.
Competition teams are cool! I really like them in their time and place. I think the AI Alignment forum is a little bit like this with their invite-only setup (which is a notable feature of many competition teams).
You need the beginner space though. A place where little babbling half-formed sprouting ideas can grow without being immediately stomped down for being insufficiently rigorous.
Another angle on the same phenomenon: If you notice a faulty foundation in the house of understanding someone is building, there are two fundamentally different approaches you could take. You could either:
Be a Fellow Builder, where you point out the mistake in a friendly way (trying not to offend, because you want more houses of understanding built)
Be a Rival Builder, where you crush the house, thereby demonstrating the faulty foundation decisively. (where you only want the best possible houses to even be built at all, so whether that other builder comes back is irrelevant)
I think Habryka is building LessWrong for Fellows, not Rivals.
From the New User’s Guide:
My impression is that you want LessWrong to be a place of competitive truth-seeking, and Habryka is guiding LessWrong towards collaborative truth-seeking.
I think it’s fine to want a space with competitive dynamics. That’s just not what LessWrong is trying to be.
(I do appreciate the attempt to bridge the epistemic gap, but just to be clear, this does not capture the relevant dimensions in my mind. The culture I want on LessWrong is highly competitive in many ways.
I care a lot about having standards and striving in intense ways for the site. I just don’t think the way Said does it really produces that, and instead think it mostly produces lots of people getting angry at each other while exacerbating tribal dynamics.
The situation seems more similar to a competitive team where anyone gets screamed at for basically any motion, with a coach who doesn’t themselves perform the sport, but just complains in long tirades any time anyone does anything, making references to long-outdated methods of practice and training, with a constant air of superiority. This is indeed a common failure mode for competitive sports teams, but the right response to that is not to abandon standards; it’s to have good standards and, most importantly, to have some functional way of updating the standards.)
So you want a culture of competing with each other while pushing each other up, instead of competing with each other while pushing each other down. Is that a fair (high-level, abstract) summary?
I think there is something in the space, but I wouldn’t speak in absolutes this way. I think many bad things deserve to be pushed down. I just don’t think Said has a great track record of pushing down the right things, and the resulting discussions seem to me to reliably produce misunderstandings and confusions.
I think a major thing that I do not like is “sneering”. Going into the cultural context of sneering and why it happens and how it propagates itself is a bit much for this comment thread, but a lot of what I experience from Said is that kind of sneering culture, which interfaces with having standards, but not in a super clear directional way.
No. This idea was already discussed in the past, and quite definitively rejected. (I don’t have the links to the previous discussions handy, though I’ll try to dig them up when I have some time. But I am definitely not doing anything of the sort.)
What you describe is a reasonable guess at the shape of the disagreement, but I’m afraid that it’s totally wrong.
EDIT: Frankly, I think that the “mystery” has already been solved. All subsequent comments in this vein are, in essence, a smokescreen.
I see the disagreement react, so now I’m thinking maybe LessWrong is trying to be a place where both competitive and collaborative dynamics can coexist, and giving authors the ability to ban users from commenting is part of what makes the collaborative space possible?
Commenting to register my interest: I would like to read this essay. As it stands, “Well-Kept Gardens” seems widely accepted. I can say I have internalized it. It may not have been challenged at any length since the original comment thread. (Please correct me with examples.)