More than half of the authors on this site who have posted more than 10 posts, about you, in particular. Eliezer, Scott Alexander, Jacob Falkovich, Elizabeth Van Nostrand, me, dozens of others. This is not a rare position. I would have to dig to give you an exact list, but the list is not short, and it includes large fractions of almost everyone who one might consider strong contributors to the site.
I see, thanks.
maybe you should just stay out of these conversations
Am I to take this as a statement of a moderation decision, or merely your personal opinion?
If the former—then, of course, I hear and obey. (However, for the remainder of this comment I’ll assume that it’s the latter.)
A good start, if you actually wanted to understand any of this at all, would be to stop strawmanning these people repeatedly by inserting random ellipses and question marks and random snide remarks implying the absurdity of their position.
No, I don’t think that I’m strawmanning anything. You keep saying this, and then your supposed corrections just restate what I’ve said, except with different valence. For instance:
Yes, shockingly, people have preferences about how people interact with them that go beyond obvious unambiguous norm violations, what a shocker!
This seems to be just another way to describe what I wrote in the grandparent, except that your description has the connotation of something fine and reasonable and unproblematic, whereas mine obviously does not.
Of course people have such preferences! Indeed, it’s not shocking at all! People prefer not to have their bad ideas challenged, they prefer not to have obvious gaps in their reasoning pointed out, they prefer that people treat all of their utterances as deserving of nothing less than “curious”, “kind”, “collaborative” replies (rather than pointed questions, direct and un-veiled criticism, and a general “trial by fire”, “explore it by trying to break it” approach)?! Well… yeah. Duh. Humans are human. No one is shocked.
(And people will, if asked, couch these preferences in claims about “bad discourse in plausibly deniable ways”, etc.? Again: duh.)
And I must point out that for all your complaints about strawmanning, you don’t seem to hesitate in doing that very thing to me. In your reply, you write as if I hadn’t included the parenthetical, where I clarify that of course I can understand the mindset in question, if I allow certain unflattering hypotheses into the space of possibilities. You might perhaps imagine reasons why I would be initially reluctant to do this. But that’s only initially. To put it another way, I have a prior against such hypotheses, but it’s not an insuperable one.
So, yes, I understand just fine; I am quite capable of “modeling the preferences” of such people as you mention. No doubt you will reply: “no, actually you don’t, and you aren’t”. But let’s flesh out this argument, proactively. Here’s how it would go, as far as I can tell:
“You are ascribing, to individuals who are clearly honest people of high integrity and strength of character, preferences and motivations which are indicative of the opposite of those traits. Therefore, your characterization cannot be accurate.”
“One man’s modus ponens is another man’s modus tollens. The behavior of the people in question points unambiguously at their possessing the ascribed preferences and motivations (which are hardly improbable a priori, and must be actively fought against even by the best of us, not simply assumed not to be operative). Therefore, perhaps they are not quite so honest, their integrity not so high, and their strength of character not so great.”
I don’t know what exactly you’d then say in response to this—presumably you won’t be convinced, especially since you included yourself in the given set of people. And, to be clear, I don’t think that disagreeing with this argument is evidence of anything; I am certainly not saying anything like “aha, and if you reject this argument that says that you are bad, then that just proves that you are bad!”.
I outline this reasoning only to provide a countervailing model, in response to your own argument that I am simply clueless, that I have some sort of inability to understand why people do things and what they want, etc. No, I certainly do have a model of what’s going on here, and it predicts precisely what we in fact observe. You can argue that my model is wrong and yours is right, but that’s what you’ll have to argue—“you lack a model that describes and predicts reality” is not an argument that’s available to you in this case.
One of these days, I will probably need to write an essay, which will be titled “‘Well-Kept Gardens Die By Pacifism’ Considered Harmful”. That day will not be today, but here’s a small down payment on that future essay:
When I read that essay, I found it pretty convincing. I didn’t see the problems, the mistakes—because I’d never been a moderator myself, and I’d never run a website.
That has changed now. And now that I have had my own experience of running an online forum (for five years now)—having decisions to make about moderation, having to deal with spam and egregious trolls and subtle trolls and just bad posters and crazy people and all sorts of things—now that I’ve actually had to face, and solve, the problems that Eliezer describes…
… now I can see how dreadfully, terribly wrong that essay is.
(Maybe the problem is that “Well-Kept Gardens Die By Pacifism” was written before Eliezer really started thinking about incentives? Maybe his own frustration with low-quality commenters on Overcoming Bias led him to drastically over-correct when trying to establish principles and norms for the then-new Less Wrong? Maybe he forgot to apply his own advice to his thinking about forum moderation? I don’t know, and a longer and deeper exploration of these musings will have to wait until I write the full version of this post.)
Still, I don’t want to give the impression that the essay is completely wrong. Eliezer writes:
I have seen rationalist communities die because they trusted their moderators too little.
But that was not a karma system, actually.
Here—you must trust yourselves.
A certain quote seems appropriate here: “Don’t believe in yourself! Believe that I believe in you!”
Because I really do honestly think that if you want to downvote a comment that seems low-quality… and yet you hesitate, wondering if maybe you’re downvoting just because you disagree with the conclusion or dislike the author… feeling nervous that someone watching you might accuse you of groupthink or echo-chamber-ism or (gasp!) censorship… then nine times out of ten, I bet, nine times out of ten at least, it is a comment that really is low-quality.
Downvoting. He’s talking about downvoting. Not banning! That was a mistake this essay could have included, but didn’t. (Perhaps because Eliezer hadn’t thought of it yet? But I do generally default to thinking well of people whose writings I esteem highly, so that is not my first hypothesis.)
And while the karma system has its own problems (of which I have spoken, a few times), nevertheless it’s a heck of a lot better than letting authors ban whoever they want from their posts.
The fact that it’s nevertheless (apparently) not enough—that the combination of downvotes for the bad-but-not-overtly-bannable, and bans for the overtly-bannable, is not enough for some authors—this is not some immutable fact of life. It simply speaks poorly of those authors.
Anyhow:
Well-kept gardens do not tend to die by accepting obviously norm-violating content.
Of course they do. That’s exactly how they tend to die. It’s precisely the obviously norm-violating content that is the problem, because if you accept that, then your members learn that your moderators either have an egregious inability to tell the good stuff from the bad, or that your moderators simply don’t care. That is deadly. That is when people simply stop trying to be good themselves—and your garden dies.
And there’s also another way in which well-kept gardens tend to die: when the moderators work to prevent the members from maintaining the garden; when “grass-roots” maintenance efforts—done, most commonly, simply with words—are punished, while the offenders are not punished. That is when those members who contribute the most to the garden’s sanctity—those who put in effort to rebut bad arguments, for instance, or otherwise to enforce norms and practices of good discussion and good thinking—will become disgusted with what they perceive as the moderators betraying the garden to its enemies.
Yes, shockingly, people have preferences about how people interact with them that go beyond obvious unambiguous norm violations, what a shocker!
This seems to be just another way to describe what I wrote in the grandparent, except that your description has the connotation of something fine and reasonable and unproblematic, whereas mine obviously does not.
This seems to me to be the crux of the issue.
There’s a thing that happens in sports and related disciplines wherein the club separates into two sections: a competition team, and everybody else, who are there to do the sport and have a good time. There are very sharp differences in mindset between the two groups.
In the competition team every little weakness or mistake is brutally hammered out of you, and the people on the team like this. It’s making them stronger and better, they signed up for it. But if a beginner tried to join them, the beginner would just get crushed. They wouldn’t get better, and they would probably leave and say their competitive-minded teammates are being jerks.
Without any beginners though, there is no competition team. The competitors all used to be beginners, and would have gotten crushed in the hyperbaric training chamber of their current team culture.
I think you are trying to push for a competition team, and Habryka is not.
Competition teams are cool! I really like them in their time and place. I think the AI Alignment forum is a little bit like this with their invite-only setup (which is a notable feature of many competition teams).
You need the beginner space though. A place where little babbling half-formed sprouting ideas can grow without being immediately stomped down for being insufficiently rigorous.
Another angle on the same phenomenon: If you notice someone has a faulty foundation in their house of understanding they are building, there are two fundamentally different approaches one could take. You could either:
Be a Fellow Builder, where you point out the mistake in a friendly way (trying not to offend, because you want more houses of understanding built)
Be a Rival Builder, where you crush the house, thereby demonstrating the faulty foundation decisively. (where you only want the best possible houses to even be built at all, so whether that other builder comes back is irrelevant)
I think Habryka is building LessWrong for Fellows, not Rivals.
From the New User’s Guide:
wants to work collaboratively with others to figure out what’s true
My impression is that you want LessWrong to be a place of competitive truth-seeking, and Habryka is guiding LessWrong towards collaborative truth-seeking.
I think it’s fine to want a space with competitive dynamics. That’s just not what LessWrong is trying to be.
(I do appreciate the attempt to bridge the epistemic gap, but just to be clear, this does not capture the relevant dimensions in my mind. The culture I want on LessWrong is highly competitive in many ways.
I care a lot about having standards and striving in intense ways for the site. I just don’t think the way Said does it really produces that, and instead think it mostly produces lots of people getting angry at each other while exacerbating tribal dynamics.
The situation seems more similar to having a competitive team where anyone gets screamed at for basically any motion, with a coach who doesn’t themselves perform the sport, but just complains in long tirades any time anyone does anything, making references to long-outdated methods of practice and training, with a constant air of superiority. This is indeed a common failure mode for competitive sports teams, but the right response to that is not to abandon standards; it’s to have good standards and, most importantly, to have some functional way of updating the standards.)
So you want a culture of competing with each other while pushing each other up, instead of competing with each other while pushing each other down. Is that a fair (high-level, abstract) summary?
I think there is something in the space, but I wouldn’t speak in absolutes this way. I think many bad things deserve to be pushed down. I just don’t think Said has a great track record of pushing down the right things, and the resulting discussions seem to me to reliably produce misunderstandings and confusions.
I think a major thing that I do not like is “sneering”. Going into the cultural context of sneering and why it happens and how it propagates itself is a bit much for this comment thread, but a lot of what I experience from Said is that kind of sneering culture, which interfaces with having standards, but not in a super clear directional way.
I think you are trying to push for a competition team, and Habryka is not.
No. This idea was already discussed in the past, and quite definitively rejected. (I don’t have the links to the previous discussions handy, though I’ll try to dig them up when I have some time. But I am definitely not doing anything of the sort.)
What you describe is a reasonable guess at the shape of the disagreement, but I’m afraid that it’s totally wrong.
EDIT: Frankly, I think that the “mystery” has already been solved. All subsequent comments in this vein are, in essence, a smokescreen.
I see the disagreement react, so now I’m thinking maybe LessWrong is trying to be a place where both competitive and collaborative dynamics can coexist, and giving authors the ability to ban users from commenting is part of what makes the collaborative space possible?
“‘Well-Kept Gardens Die By Pacifism’ Considered Harmful”
Commenting to register my interest: I would like to read this essay. As it stands, “Well-Kept Gardens” seems widely accepted. I can say I have internalized it. It may not have been challenged at any length since the original comment thread. (Please correct me with examples.)