I should comment publicly on this; I’ve talked with various people about it extensively in private. In case you just want my conclusion before my reasoning, I am sad but weakly supportive. An outline of six points, which I will maybe expand on if people ask questions:
I should link some previous writing of mine that wasn’t about Said:
When discussing the death of Socrates, I think it’s plausible that Socrates ‘had it coming’ because he was attacking the status-allocation methods of Athens, which were instrumental in keeping the city alive. That is, the ‘corrupting the youth’ charge might have been real and the sort of thing that it makes sense to watch out for.
In response to Zack’s post Lack of Social Grace is an Epistemic Virtue, I responded that the Royal Society didn’t think so. It seems somewhat telling to me that academic culture (the one I call “scholarly pedantic argumentative culture”) predates good science by a long time, and is clearly not sufficient to produce science. You need something more, and when I read about the Royal Society or the Republic of Letters I get a sense that they took worldly considerations seriously. They were trying to balance epistemic and instrumental rationality, in a way oft discussed on LW.
I don’t think I ever had a personal problem with Said or his comments; I generally found them easy to read and easy to respond to.
(See, for example, this recent exchange, where Said asked a clarifying question that I found easy to answer. The conversation continued from there—my comment was unfortunately a bit confusing—but I liked Gordon’s comment that ended the conversation.)
In particular, I was defensive of Said during the moderation discussion which involved him responding to my post (see this comment for me responding to habryka about how I read Said’s comment), and have generally been defensive of Said in other moderation discussions. I think it is important that LessWrong not lose sight of core rationalist virtues, and not fall into the LinkedIn attractor.
In particular I think we should keep as a live hypothesis “this person finds Said’s comments annoying because they are directing attention at the hole in their argument.” As someone who likes finding the hole in my own argument and then developing the argument further, this never annoyed me. But this isn’t the only hypothesis, and I think Said or Zack often acted as though it was.
I am the person that caused Duncan to crystallize the concept of ‘emotionally tall’ discussed here (ctrl-f for it; you don’t have to read the rest of the post). In many ways this is good for a moderator—my skin is thick enough that I don’t have to worry about many interactions—but in some ways it is bad (in that behavior which is driving people away doesn’t bother me personally, and I need to deliberately ‘look out for’ users in a way that I don’t have to look out for myself). I think I used to view this as “a rationalist virtue I possess” and now I view it more as “an incidental fact about me”—like, my IQ is also relevant to my rationality, but it’s not really a “rationalist virtue” so much as the amount of horsepower that I’m working with.
I think cultural effects are important and Said’s case merited this much time and attention.
I really do think Said has positive effects and has put nontrivial effort into making rationality broadly available and am sad to see him go.
I also think Said has negative effects and am hopeful about seeing what happens on LessWrong without him.
I had hoped we would get to ground on some of the disagreements on cultural preferences and values, but what happened was mostly that Said and Zack laid out their models, Oli and the mod team laid out theirs, and I don’t think we ever successfully identified cruxes for both sides. Like, I’m still not sure what Zack or Said think of the Royal Society example; Zack talks about it a bit in another comment on that page but not in a way that feels connected to the question of how to balance virtues against each other, and what virtues cultures should strive towards. (Said, in an email, strongly rejects my claim that there’s a difference between his culture of commenting and the Royal Society culture of commenting that I describe.)
I think early LessWrong was very focused on biases / the psychological literature on irrationality and formulating defenses against those things. I think in that framing, pointing out that something is impeding the flow of information is almost enough to end the conversation on its own. I think Said and Zack were pretty easily able to point to “and this is how your proposal blocks some information flow that is good.”
I think later LessWrong was more focused on holistic / integrated approaches. Asking “what would the Bayesian superintelligence do in this situation?” is a pretty different question from “am I running afoul of my checklist of biases?”, although often it involves checking your checklist of biases. A master carpenter still uses their string.
In some ways, this actually reminds me of the EDT-CDT-FDT progression in decision theory. EDT considers all evidence, which causes it to make some dumb mistakes. CDT rules out a class of evidence, which avoids EDT’s dumb mistakes, but causes some subtle mistakes. FDT rules back in a narrower category of the evidence that CDT rules out, which avoids those subtle mistakes. But from CDT’s perspective, the evidence that FDT is ruling back in is illegitimate, and it’s a mistake to return to superstition.
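To make the progression concrete, here is a minimal sketch of the standard Newcomb’s-problem calculation; the 99%-accurate predictor and the dollar amounts are the conventional textbook values, not anything from this discussion.

```python
# Newcomb's problem: a predictor with accuracy P fills an opaque box with
# $1,000,000 iff it predicted you would take only that box; a transparent
# box always holds $1,000. Conventional textbook numbers (assumed).
P = 0.99
BIG, SMALL = 1_000_000, 1_000

# EDT conditions on the action itself: choosing to one-box is strong
# evidence that the predictor foresaw one-boxing.
edt_one_box = P * BIG                    # 990,000
edt_two_box = (1 - P) * BIG + SMALL      # ~11,000 -> EDT one-boxes

# CDT rules that evidence out: the prediction is already causally fixed,
# so the opaque box is full with some probability q whatever you choose,
# and two-boxing dominates for every q.
q = 0.5                                  # any prior works; it cancels out
cdt_one_box = q * BIG                    # 500,000
cdt_two_box = q * BIG + SMALL            # 501,000 -> CDT two-boxes

print(edt_one_box, edt_two_box, cdt_one_box, cdt_two_box)
```

FDT rules the predictor’s correlation back in (the prediction depends on the same decision procedure that is running now), so it one-boxes here while still avoiding EDT’s mistakes on cases like the smoking lesion; from CDT’s vantage, that ruled-back-in evidence looks like a return to superstition.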
I basically agree with Said’s view that this is a disagreement of principles, and it’s getting ahead of ourselves to simply declare that “our view is more sophisticated than yours; you don’t understand ours.”
Nevertheless I do believe that our view is more sophisticated, and I operationalize this by something like ITT-passing: I think it’s generally the case that I can see the thing Said or Zack is pointing out, while in the reverse direction I mostly get the sense that they rarely see the criticism, and when they do, it’s only as something that seems fundamentally illegitimate to them. Critic Contributions are Logically Irrelevant is a crisp example of this, I think: people often raise objections about commenters that don’t make sense as logical criticisms, but if those objections aren’t intended as logical criticisms, their logical irrelevance seems beside the point. (Perhaps it is worth rereading Feeling Rational.)
I think this took so long because the balance of positives and negatives was so close, and so we were ambivalent for a long time.
I suggested running an Athenian ostracism process or something similar. I think it’s maybe worth discussing publicly whether that would have been better?
This is like Alicorn’s ‘modularity’ proposal, but different. Whereas that one rested on ‘the mods are tired of you’, this one rested on ‘the populace is tired of you’ (or afraid of you, or so on). The Athenian citizens would vote on whether or not to have an ostracism, and then if they decided to have one, the citizens could write down a name. If enough people voted against someone, they would be exiled for 10 years.
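For concreteness, a minimal sketch of that two-stage procedure; the 6,000 figure is the commonly cited classical quorum (historians dispute whether it applied to total turnout or to votes against one person), and the function and its parameters are mine, not an actual proposal.

```python
from collections import Counter

def run_ostracism(hold_votes, name_votes, quorum=6000):
    """Two-stage Athenian-style ostracism (hypothetical sketch).

    Stage 1: citizens vote on whether to hold an ostracism at all.
    Stage 2: if so, each citizen writes down one name; the person named
    most often is exiled for 10 years, provided turnout meets the quorum.
    """
    if sum(hold_votes) * 2 <= len(hold_votes):
        return None                       # majority declined: no ostracism
    if len(name_votes) < quorum:
        return None                       # insufficient turnout
    (name, _count), = Counter(name_votes).most_common(1)
    return name                           # exiled for 10 years
```

An LW analogue would presumably swap the quorum and the ten-year term for site-appropriate values.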
The benefits I see from this are threefold:
Shared reality on the question “but is Said sufficiently annoying to the bulk of LW citizens?”
Asking “who is the worst commenter on LW?” more generally. In several moderation discussions around Said, we came up with various metrics for “most disliked user”, and identified some problem users who hadn’t risen to mod attention by being involved in large blowups. (A toy version of one such metric is sketched after this list.)
Democratic legitimacy. It’s one thing to say “we’ve received complaints” and another to say “yeah, most people don’t want you here”, and I think we can’t say the latter at present.
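The discussion doesn’t say which metrics were actually used, so purely as a hypothetical illustration of the kind of thing one might compute: the author net-downvoted by the largest number of distinct voters. Every name and data shape below is invented for the example.

```python
from collections import defaultdict

def most_disliked(votes):
    """Hypothetical 'most disliked user' metric, invented for illustration:
    the author net-downvoted by the largest number of distinct voters.
    `votes` is an iterable of (voter, author, karma_delta) tuples."""
    net = defaultdict(int)              # (voter, author) -> net karma given
    for voter, author, delta in votes:
        net[(voter, author)] += delta
    detractors = defaultdict(set)       # author -> voters net-negative on them
    for (voter, author), total in net.items():
        if total < 0:
            detractors[author].add(voter)
    return max(detractors, key=lambda a: len(detractors[a]), default=None)
```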
There are many drawbacks, however.
Elections necessarily involve lower context than expert decision-making. If we have deep and detailed models of moderation, and most users are running some simpler process, ostracisms will be settled by the simpler process instead of the deep models.
Despite lower context, elections generally involve higher cost! A thousand LWers each considering the question of which user annoys them the most (net of the value they provide) could pretty easily end up taking longer, in aggregate, than the moderation discussions that we had, long and extensive as they were.
This also doesn’t involve corrective effort. We talk with problem users relatively early in the process, and sometimes the problem gets solved. This is instead a blunt instrument that knocks people out of the community, and engenders some unpleasant coalitional dynamics. (Would people put my name in, for suggesting that we be willing to exile people at all?)
People have trouble doing the accounting on diffuse responsibility. Will everyone who voted on the Said ban feel bad about LessWrong and their participation in it? How does that help the community stay fun?
I generally buy habryka’s model of moderation here, though something about its applicability to Said seems somewhat unclear to me.
My story of what happened with Said is that he’s not tracking some of the damage that he’s doing and he doesn’t think it’s his responsibility to track or mitigate that damage.
A lot of that damage accumulated in people’s impressions of him, impressions which would be activated when they first looked at a comment (like most sites, we put usernames before the content) and would then cause them to take more damage on reading the comment than they would have if it were from someone else, in part because they would read it less charitably. “Is Said up to that destructive pattern again?”, they might ask themselves, in a way that makes them more likely to find it.
I think this also would show up in their interpretations of ambiguous evidence. Like, Benquo’s recent post on Said was cited as support for their position by both habryka (in the OP) and Said (here). My read is that both citations are correct because they’re focused on different narrow facets of Benquo’s post.
Unfortunately, once you accumulate enough of this damage it is very hard to restore a good state.
I think on one layer, it’s fair to describe this as “habryka is banning Said because he doesn’t like him.” But I think it’s more fair to describe this as “habryka doesn’t like Said because of <destructive pattern>, and is banning him for <destructive pattern>.”
I will finish with this comment from 12 years ago, in which I criticize Eliezer’s moderation practices. I was missing the concept of emotional tallness, then, and I think also missing the point about the conversation quality being worse because of indirect effects. I can see the younger me levelling a similar criticism at the mod team now.
I’m still not sure what Zack or Said think of the Royal Society example; Zack talks about it a bit in another comment on that page but not in a way that feels connected to the question of how to balance virtues against each other, and what virtues cultures should strive towards. (Said, in an email, strongly rejects my claim that there’s a difference between his culture of commenting and the Royal Society culture of commenting that I describe.)
This seems to be by far the most important crux; nothing else could’ve substantially changed attitudes on either side. Do environments widely recognized for excellence and intellectual progress generally have cultures of harsh and blunt criticism, and to what degree is that culture’s presence or absence load-bearing? This question also looks pretty important on its own, and the apparent lack of interest in or attention to it is confusing.
Do environments widely recognized for excellence and intellectual progress generally have cultures of harsh and blunt criticism
To the best of my ability to detect, the answer is clearly and obviously “no” — there’s an important property of people not-bullshitting and not doing the LinkedIn thing, but you can actually do clear and honest and constructively critical communication without assholery (and it seems to me that the people who lump the two together have a skill issue and some sort of color-blindness; because they don’t know how to get the good parts of candor and criticism while not unduly hurting feelings, they assume that it can’t be done).
Probably buried in noise; maybe write a question post about it?
Upvoted for this link, which I found valuable.