LessWrong Team
Ruby
Experimental Two-Axis Voting: “Overall” & “Agreement”
The LW team has spent the last few weeks developing alternative voting systems. We’ve enabled two-axis voting on this post. The two dimensions are:
Overall: what is your overall feeling about the comment? Does it contribute positively to the conversation? Do you want to see more comments like this?
Agreement: do you agree with the position of this comment?
Separating these out allows you to express more nuanced reactions to comments, such as “I still disagree with what you’re arguing for, but you’ve raised some interesting and helpful points” and “although I agree with what you’re saying, I think this is a low-quality comment”.
Edited to Add: I checked with Jessica first whether she was happy for us to try this experiment with her post.
Someone mentioned that they thought the Concepts / Tag Portal was really nifty and they only just got round to looking at it, and that they thought it was motivating for tagging. I probably should have included a screenshot in the text (just added), but here’s a comment with a larger one:
www.lesswrong.com/tags/all
I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.
What’s live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) “I will make a fuss” is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn’t illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.
Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don’t want to damage the relationship I have with the person who was pressuring me. I’m unhappy about it, but I still value that relationship. Heck, I haven’t named them. I should note that this person updated (or began reconsidering their position) after Zoe’s post and has since stopped applying any pressure on me/LessWrong.
With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in any way taking the side against Leverage. I predict that if I do so, I’ll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don’t know, I don’t fear any particularly terrible retribution myself, but I am loath to make “enemies”.
I’d like to think that I’ve got lots of integrity and will say true things despite pressures and incentives otherwise, but I’m definitely not immune to them.
First, I’m annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away...
I want to second this reaction (basically your entire second paragraph). I have been feeling the same but hadn’t worked up the courage to say it.
At the risk of guessing wrong, and perhaps typical-mind-fallacying, I’m imagining that you’re [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You’ve spent dozens of hours (more?) and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and just to try to uphold basic standards for truthseeking discourse. You’ve written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.
I don’t think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just VeryGoodLessWrongs), though I also am pretty sure that you’d be much happier with my ideal, you’d think it was pretty good if not perfect. Respectable, maybe adequate. A garden.
And I’m really sad that the current LessWrong falls really, really far short of my own ideals (and Ray of his ideals, and Oli of his ideals, etc.). And not just short of a super-amazing-lofty-ideal, but also short of a “this place is really under control” kind of ideal. I take responsibility for it not being so, and I’m sorry. I wouldn’t blame you for saying this isn’t good enough and wanting to leave[1]; there are some pretty bad flaws.
But sir, you impugn my and my site’s honor. This is not a perfect garden, but it is also not a jungle. And there is an awful lot of gardening going on. I take it very seriously that LessWrong is not just any place, and it takes ongoing work to keep it so. This is approximately my full-time job (and that of others too), and while I don’t work 80-hour weeks, I feel like I put a tonne of my soul into this site.
Over the last year, I’ve been particularly focused on what I suspect are existential threats to LessWrong (not even to the ideal, just to the decently-valuable thing we have now). I think this very much counts as gardening. The major one over the last year is how to both have all the AI content (and I do think AI is the most important topic right now) and not have it eat LessWrong and turn it into the AI website rather than the truth-seeking/effectiveness/rationality website, which is what I believe is its true spirit[2]. So far, I feel like we’re still failing at this. On many days, the Frontpage is 90+% AI posts. It’s not been a trivial problem to solve.
The other existential problem, beyond the topic, that I’ve been anticipating for a long time and that is now heating up is the deluge of new users flowing to the site because of the rising prominence of AI. Moderation is currently our top focus, but even before that, every day – the first thing we do when the team gets in in the morning – is review every new post, all first-time submissions from users, and the activity of users who are getting a lot of downvotes. It’s not exactly fun, but we do it basically every day[3]. In the interests of greater transparency and accountability, we will soon build a Rejected Content section of the site where you’ll be able to view the content we didn’t let go live, and I predict that will demonstrate just how much this garden is getting tended, and that counterfactually the quality would be a lot, lot worse. You can see here a recent internal document that describes my sense of priorities for the team.
I think the discourse norms and bad behavior (and I’m willing to say now, in advance of my more detailed thoughts, that there’s a lot of badness to how Said behaves) are also serious threats to the site, and we do give those attention too. They haven’t felt like the most pressing threats (or, for that matter, opportunities) recently, and I could be making a mistake there, but we do take them seriously. Our focus (which I think has a high opportunity cost) has been turned to the exchanges between you and Said this week; plausibly you’ve done us a service by drawing our attention to behavior we should be deeming intolerable, but it’s easily 50-100 hours of team attention.
It is plausible the LessWrong team has made a mistake in not prioritizing this stuff more highly over the years (it has been years – though Said and Zack and others have in fact received hundreds of hours of attention), and there are definitely particular projects that I think turned out to be misguided and less valuable than marginal moderation would have been, but I’ll claim that it was definitely not an obvious mistake that we haven’t addressed the problems you’re most focused on.
It is actually on my radar, and I’ve actively wanted for a while, a system that reliably gets the mod team to show up and say “cut it out” sometimes. I suspect that’s what should have happened a lot earlier on in your recent exchanges with Said. I might have liked to say “Duncan, we the mods certify that if you disengage, it is no mark against you” or something. I’m not sure. Ray mentioned the concept of a “Maslow’s Hierarchy of Moderation” and I like that idea, and would like to get soon to the higher level where we’re actively intervening in these cases. I regret that I in particular on the team am not great at dropping what I’m doing to pivot when these threads come up; perhaps I should work on that.
I think a claim you could make is that the LessWrong team should have hired more people so they could cover more of this. Arguing why we haven’t (or why Lightcone as a whole didn’t keep more team members on the LessWrong team) is a bigger discussion. I think things would have been worse if LessWrong had been bigger most of the time, and barring an unusually good candidate, it’d be bad to hire right now.
All this to say: this garden has a lot of shortcomings, but the team works quite hard to keep it at least as good as it is and to try to make it better. Fair enough if it doesn’t meet your standards or isn’t how you’d do it; perhaps we’re not all that competent.
(And also, you’ve had a positive influence on us, so your efforts are not completely in vain. We do refer to your moderation post/philosophy even if we haven’t adopted it wholesale, and make use of many of the concepts you’ve crystallized. For that I am grateful. Those are contributions I’d be sad to lose, but I don’t want to push you to offer them to us if doing so is too costly for you.)
[1] I will also claim though that a better version of Duncan would be better able to tolerate the shortcomings of LessWrong and improve it too; that even if your efforts to change LW aren’t working enough, there are efforts on yourself that would make you better, and better able to benefit from the LessWrong that is.
[2] Something like the core identity of LessWrong is rationality. In alternate worlds, that is the same, but the major topic could be something else.
[3] Over the weekend, some parts of the reviewing get deferred till the work week.
I warned them, I said it wasn’t safe to put an AI in a text box.
Curated. This post feels virtuous to me. I’m used to people talking about timelines in terms of X% chance of Y by year Z; or otherwise in terms of a few macro features (GDP doubling every N months, FOOM). This post, even if most of the predictions turn out to be false, is the kind of piece that enables us to start having specific conversations about how we expect things to play out and why. It helps me see what Daniel expects. And it’s concrete enough to argue with. For that, bravo.
In my capacity as moderator, I saw this post this morning and decided to leave it posted (albeit as Personal blog with reduced visibility).
I think limiting the scope of what can be discussed is costly for our ability to think about the world and figure out what’s true (a project that is overall essential to AGI outcomes, I believe), and therefore I want to minimize such limitations. That said, there are conversations that wouldn’t be worth having on LessWrong, topics that I expect would attract attention that’s just not worth it – those I would block. However, this post didn’t feel like where I wanted to draw the line. Blocking this post feels like it would be cutting out too much for the sake of safety and giving the fear of adversaries too much control over us and our inquiries. I liked how this post gave me a great summary of controversial material, so that I now know what the backlash was in response to. I can imagine other posts where I’d feel differently (in fact, there was a recent post I told an author it might be better to leave off the site, though they missed my message and posted anyway, which ended up being fine).
It’s not easy to articulate where I think the line is or why this post seemed on the left of it, but it was a deliberate judgment call. I appreciate others speaking up with their concerns and their own judgment calls. If anyone ever wants to bring these up with me directly (not to say that comment threads aren’t okay), feel free to DM me or email me: ruby@lesswrong.com
To address something that was mentioned, I expect to change my response in the face of posting trends, if they seem fraught. There are a number of measures we could potentially take then.
I think being unable to reply to comments on your own posts is very likely a mistake and we should change that. (Possibly, if conditions are such that we think that restriction is warranted, we should issue a ban instead.)
”I’m downvoted because I’m controversial” is a go-to stance for people getting downvoted (and resultantly rate-limited), though in my experience the issue is quality rather than controversy (or rather both in combination).
Overall though, we’ve been thinking about the rate limit system and its effects. I think there are likely bad effects even if it’s successfully, in some cases, reducing low-quality stuff.
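For readers unfamiliar with the mechanism under discussion, here is a minimal sketch of what a downvote-triggered rate limit might look like. The thresholds, field names, and limits below are invented for illustration; they are not LessWrong’s actual rules.

```typescript
// Hypothetical sketch of a downvote-based rate limit.
// All thresholds and names are illustrative, not LessWrong's actual values.
interface RecentActivity {
  netKarmaLast20Comments: number; // summed karma on the user's last 20 comments
}

function commentsAllowedPerDay(activity: RecentActivity): number {
  if (activity.netKarmaLast20Comments < -15) return 1; // heavily downvoted: ~1 per day
  if (activity.netKarmaLast20Comments < 0) return 3;   // mildly negative: a few per day
  return Number.POSITIVE_INFINITY;                     // otherwise, no limit
}
```

The appeal of a rule like this is that it needs no moderator in the loop; the cost, as noted above, is that it can’t distinguish “controversial but valuable” from “low quality”.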
This content was moved from the main body of the post to this comment. After receiving some good feedback, I’ve decided I’ll follow the template of “advice section in comments” for most of my posts.
Some Quick Advice
Awareness
See if you can notice conversational cultures/styles which match what I’ve described.
Begin noticing if you lean towards a particular style.
Begin paying attention to whether those you discuss with might have a particular style, especially if it’s different from yours.
Start determining if different groups you’re a member of, e.g. clubs or workplaces, lean in one cultural direction or another.
Openness
Reflect on the advantages that cultures/styles different from your own have, and why others might use them instead.
Consider that on some occasions styles different to yours might be more appropriate.
Don’t assume that alternatives to your own culture are obviously wrong, stupid, bad, or lacking in skills.
Experimentation
Push yourself a little in the direction of adopting a style that’s non-default for you. Perhaps you already do, but push yourself a little more. Try doing so while feeling comfortable and open, if possible.
Ideal and Degenerate Forms of Each Culture
Unsurprisingly, each of the cultures has its advantages and weaknesses, mostly to do with when and where they’re most effective. I hope to say more in future posts, but here I’ll quickly list what I think the cultures look like at their best and worst.
Combat Culture
At its best
Communicators can more fully focus their attention on their ideas and content rather than devoting thought to the impact of their speech acts on the emotions of others.
Communication can be direct and unambiguous when it doesn’t need to be “cushioned” to protect feelings.
The very combativeness and aggression prove to all involved that they’re respected and included.
At its worst
The underlying truth-seeking nature of conversation is lost and instead becomes a fight or competition to determine who is Right.
The combative style around ideas is abused to dismiss, dominate, bully, belittle, or exclude others.
It devolves into a status game.
Nurture Culture
At its best
Everyone is made to feel safe, welcomed, and encouraged to participate without fear of ridicule, dismissal, or judgment.
People assist each other to develop their ideas, seeking to find their strongest versions rather than attacking their weak points. Curiosity pervades.
At its worst
Fear of inducing a negative feeling in others and the need to create positive feelings and impressions of inclusion dominate over any truth-seeking goal.
Empathy becomes pathological and ideas are never criticized.
Communicators spend most of their thought and attention on the social interaction itself rather than the ideas they’re trying to exchange.
Continuing our experiments with the voting system, I’ve enabled two-axis voting for this thread too.
The two dimensions are:
Overall (left and right arrows): what is your overall feeling about the comment? Does it contribute positively to the conversation? Do you want to see more comments like this?
Agreement (check and cross): do you agree with the position of this comment?
Strong upvote. Thank you for writing this, it articulates the problems better than I had them in my head and enhances my focus. This deserves a longer reply, but I’m not sure if I’ll get to write it today, so I’ll respond with my initial thoughts.
What I really want from LessWrong is to make my own thinking better, moment to moment. To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers. To be embedded in a context that anti-evokes all those things my brain keeps trying to do, the way being in a church anti-evokes coarse language.
I want this too.
In the big, important conversations, the ones with big stakes, the ones where emotions run high—
I don’t think LessWrong, as a community, does very well in those conversations at all.
Regarding the three threads you list: I, others involved in managing LessWrong, and leading community figures who’ve spoken to me are all dissatisfied with how those conversations went, and believe it calls for changes in LessWrong.
Solutions I am planning or considering:
Technological solutions (i.e. UI changes). Currently, I think it’s difficult to provide norm-enforcing feedback on comments (you are required to write another comment, which is actually quite costly). One is also torn between signalling agreement/disagreement with a statement and approval/disapproval of the reasoning. These issues could be addressed by factoring karma into two axes (approve/disapprove, agree/disagree) and also possibly something like “epistemic reacts”, where you can easily tag a comment as exemplifying a virtue or vice. I think that would give standard-upholding users (including moderators) a tool to uphold the standards.
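To make the proposed factoring concrete, here is a minimal sketch of what a two-axis vote record with optional epistemic reacts might look like. The type names, fields, and react list are all hypothetical illustrations, not LessWrong’s actual schema.

```typescript
// Hypothetical data model for two-axis voting plus "epistemic reacts".
// Every name and value here is illustrative, not LessWrong's actual schema.
type AxisVote = -1 | 0 | 1; // down / abstain / up

// A possible starter set of virtues and vices a reader could tag.
type EpistemicReact =
  | "clear-reasoning"
  | "changed-my-mind"
  | "strawman"
  | "unsupported-claim";

interface CommentVote {
  userId: string;
  commentId: string;
  overall: AxisVote;   // "do I want more comments like this?"
  agreement: AxisVote; // "do I agree with the position?"
  reacts?: EpistemicReact[];
}
```

Keeping the two axes as separate fields, rather than one score, is what lets readers express “upvote overall, disagree on substance” in a single action.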
There’s a major challenge in all of this, in that I see any norms you introduce as being additional tools that can be abused to win – just selectively call out your opponents for alleged violations to discredit them. This can maybe be worked around, say by giving react abilities only to trusted users or something, but it’s non-trivial.
Another thing is that new users are currently on too even a footing with established users. You can make an account and your comments will look the same as those of a user who’s proven themselves. This could be addressed by marking new users as such (Hacker News does this), or we can create spaces where new users cannot easily participate (more on this in a moment).
Not a solution, but a problem to be solved: when it comes to users, high karma can in part indicate highly valuable contributions, but it is also just a measure of engagement. Someone with hundreds of low-scoring comments can have a much higher score than someone with higher standards and only a few standout posts. This means that karma alone is inadequate for segmenting out users with higher standards and quality from the rest.
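As a toy illustration of that engagement-vs-quality point (all numbers invented):

```typescript
// Toy illustration: total karma rewards volume, not per-item quality.
// All numbers are invented.
const prolificUser = { items: 300, avgScore: 2 };  // many mediocre comments
const standoutUser = { items: 12, avgScore: 40 };  // a few excellent posts

const totalKarma = (u: { items: number; avgScore: number }) =>
  u.items * u.avgScore;

console.log(totalKarma(prolificUser)); // 600 (higher total karma)
console.log(totalKarma(standoutUser)); // 480 (lower total, far higher quality per item)
```

Any segmentation scheme would presumably need something like per-item averages or top-k scores rather than the raw total.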
I am interested in creating “Gardens within the Garden”. As you say, counting up, LessWrong does well compared to the GenPop, but that’s far from sufficient. I think it would be good to have a place where people can level up, and a further higher-quality space to which people can strive to be admitted. Admission could be granted by moderators (likely) or by passing an adequate test (if we are able to create such a thing); I imagine you (Duncan) would actually be quite helpful in designing it.
I think our new user system is woefully inadequate. The system needs to change so that admission to LessWrong as a commenter and poster is not something that is taken for granted, and so that new users are made aware that many (most?) new users will be turned away (once their initial contributions seem to be of low enough quality).
Standards need to be made clear to new users, and, for that matter, they need to be clarified for everyone. This is hard because, to me at least, picking the right standards is not easy. Picking the wrong standards to enforce could kill LessWrong (which I think would be worse than living with the current state of things).
I think that by starting with getting the standards for new users clear (“stopping the bleeding”), we can then begin to extend them to the existing user base. As a general approach, we (the moderators) have a much higher bar for banning long-term users than new users[1].
This is just the cached list that I’m able to retrieve on the spot. There are surely more good things that I’m forgetting or haven’t thought of.
I think it isn’t. I think that a certain kind of person is becoming less prevalent on LessWrong, and a certain other kind of person is becoming more prevalent, and while I have nothing against the other kind, I really thought LessWrong was for the first group.
It is definitely the case that people who I want on LessWrong are not here because the discussion doesn’t meet their standards. They have told me. I want to address this, although it’s somewhat hard because the people I want tend to be opinionated about standards in ways that conflict, or at least whose intersection would be a norm-enforcement burden that neither moderators nor users could tolerate. That said, I think there are improvements in quality that would be universally regarded as good and would shift the culture and userbase in good directions.
In no small part, the duty of the moderation team is to ensure that no LessWronger who’s trying to adhere to the site’s principles is ever alone, when standing their ground against another user (or a mob of users) who isn’t
I would really like this to be true.
Hire a team of well-paid moderators for a three-month high-effort experiment of responding to every bad comment with a fixed version of what a good comment making the same point would have looked like. Flood the site with training data.
If you can find me people capable of being these moderators, I will hire them. I think the number of people who have mastered the standards you propose and are also available is... small; I have not been able to locate them so far.
Timelines for things happening from LW team
Progress is a little slow at the moment. Since the restructuring into Lightcone Infrastructure, I’m the only full-time member of the LessWrong team. I still get help with various tasks from other Lightcone members, and jimrandomh independently does dev work as an open source contributor; however, I’m the only one able to drive large initiatives (like rescuing the site’s norms) forward. Right now the bulk of my focus is on hiring[2]. Additionally, I’ve begun doing some work on the new user process, and I hope to soon begin the experiments with karma factorization. Those are smaller steps than what’s required, unfortunately.
If you or someone you know is a highly capable software engineer with Rationalist virtue, please contact me. While the community does have many software developers, the number who are skilled enough and willing to live in Berkeley and work on LessWrong is not so high that it’s trivial to hire.
--
[1] In the terminology of Raemon, I believe we have some Integrity Debt in disclosing how many new users we ban (and their content that we remove).
[2] It’s plausible I should drop hiring and just focus on everything in the OP/that I mention above, but I consider LessWrong “exposed” right now, since I’m neither technically strong enough nor productive enough to maintain the site alone, which makes me reliant on people outside the team, which is a kind of brittle way for things to be.
Curated. This is a great post. We (the mods) generally struggle to get people to write up thoughts worth hearing because they fear that they’re not yet defensible enough. Until now, I’d have encouraged people to share all their thoughts at various stages of development/research/refinement/etc, just with appropriate epistemic statuses attached. This post goes further and provides an actual specific approach that one can follow to write up ideas at any level of development. More than that, it provides a social license of which I approve.
The ideal I endorse (as a mod) is something like: LessWrong is a place where you can develop and share your ideas wherever they’re at. It’s great to publish a well-researched or well-considered post that you’re confident in, but it can also be incredibly valuable to share nascent thoughts. Doing so allows you, the writer, to get early feedback, and can often provide readers something good enough to learn from and build upon. And it’s definitely much better to publish early than not at all!
A challenging aspect of posting less-developed thoughts is that they can elicit more negative feedback than a post that’s had a lot of effort invested to counter objections, etc. This feedback can be hard and unpleasant to receive, especially if it’s worded bluntly. My ideal here, which might take work to achieve, is a culture where commenters calibrate their feedback (or at least its tone) to the kind of post being made. If someone’s exploring an idea, encourage the exploration even as you point out a flaw in the process.
For people who are especially concerned that their thoughts aren’t ready for general publication, we built Shortform to be the home for earlier-stage material. The explicit purpose of Shortform is that you can share thoughts which only took you a relatively short amount of time to write. [However, posting as a regular LessWrong post can also be fine, if you’re comfortable. And mods can help you decide where to post if you’re unsure.]
This is hugely helpful, a great community service! Thanks so much, mingyuan.
Curated. I’ve got to hand it to this post for raw unadulterated expression of pure Ravenclaw curiosity at how the world (and we ourselves) work. It is morbid and it’s perhaps fortunate the images are broken, but I’m just enjoying how much the author is reveling in the knowledge and experience here.
I like the generalized lesson here of GO LOOK AT THE WORLD, it’s right there.
I don’t know that I have the stomach to do this myself, but I’m glad some people do!
Drama
I object to describing recent community discussions as “drama”. Figuring out what happened within community organizations and holding them accountable is essential for us to have a functioning community. [I leave it unargued that we should have community.]
Thank you for sharing such personal details for the sake of the conversation.
I don’t think the post fully conveyed it, but I think the employees were quite afraid of leaving and expected doing so to bring a lot of backlash or consequences. A particularly salient concern for people early in their EA careers is what kind of reference they’ll get.
Think about the situation of leaving your first EA job after a few months. Option 1: say nothing about why you left, have no explanation for leaving early, and don’t really get a reference. Option 2: explain why the conditions were bad and risk the ire of Nonlinear (who are willing to say things like “your career could be over in a couple of DMs”). It’s that kind of bind that gets people to keep persisting, hoping it’ll get better.
One thing I do want to note is that while I think you’re pointing at a real phenomenon, I don’t actually think the two examples you gave for my post are quite pointing at the right thing.
This itself serves as an interesting example. Even if a particular author isn’t bothered by certain comments (due to an existing relationship, being unusually stoic, etc), it is still possible for others to perceive those comments as aversive/hostile/negative.
This is a feature of reality worth noticing, even before we determine what the correct response to it is. It suggests you could have a world with many LessWrong members discussing in a way that they all enjoyed, yet that appears hostile and uncivil to the outside world, who assume those participating are doing so despite being upset. This possibly has bad consequences for getting new people to join (those who aren’t here yet). You might expect this if a Nurture-native person were exposed to a Combat culture.
If that’s happening a lot, you might do any of the following:
1) shift your subculture to represent the dominant outside one
2) invest in “cultural onboarding” so that new people learn to understand that people aren’t unhappy with the comments they’re receiving (of course, we want this to actually be true)
3) create different spaces: ones for new people who are still acculturating, and others for the veterans who know that a blunt critical remark is a sign of respect.
The last one mirrors how most interpersonal relationships progress. At first you invest heavily in politeness to signal your positive intent and friendliness; progressively, as the prior of friendliness is established, fewer overt signals are required and politeness requirements drop; eventually, the prior of friendliness is so high that it’s possible to engage in countersignalling behaviors.
A fear I have is that veteran members of blunt and critical spaces (sometimes LW) have learnt that critical comments don’t have much interpersonal significance and pose little reputational or emotional risk to them. That might be the rational[1] prior from their perspective, given their experience. A new member to the space, bringing priors from the outside world, may rationally infer hostility and attack when they read a casually and bluntly written critical comment. Rather than reading it as someone engaging positively with their post and wanting to discuss, they just feel slighted, unwelcome, and discouraged. This picture remains true even if a person is not usually sensitive or defensive about what they know is well-intentioned criticism. The perception of attack can be the result of appropriate priors about the significance[2] of different actions.
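To make the two-priors point concrete, here is a toy Bayes calculation; every probability in it is invented for illustration.

```typescript
// Toy Bayes update: the same blunt comment, read under different priors.
// All probabilities are invented for illustration.
const pBluntGivenHostile = 0.9;  // hostile commenters often write bluntly
const pBluntGivenFriendly = 0.4; // but so do many friendly veterans

function posteriorHostile(priorHostile: number): number {
  const pBlunt =
    pBluntGivenHostile * priorHostile +
    pBluntGivenFriendly * (1 - priorHostile);
  return (pBluntGivenHostile * priorHostile) / pBlunt;
}

console.log(posteriorHostile(0.5).toFixed(2));  // newcomer's prior of 0.5 -> 0.69: reads as attack
console.log(posteriorHostile(0.05).toFixed(2)); // veteran's prior of 0.05 -> 0.11: reads as engagement
```

Same evidence, opposite conclusions; both readers are updating correctly given the priors their experience has given them.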
If this picture is correct and we want to recruit new people to LessWrong, we need to figure out some way of ensuring that people know they’re being productively engaged with.
--------------------
Coming back to this post: here there was private information which shifted what state of affairs the cited comments were Bayesian evidence for. Most people wouldn’t know that Raemon had requested Unreal copy the comment over from FB (where he’d posted it only partially), or that Raemon has been housemates with Qiaochu for years. In other words, Raemon has strongly established relationships with those commenters and knows them to be friendly to him – but that’s not universal knowledge. The OP’s assessment might be very reasonable if you lacked that private info (knowing it myself already, it’s hard for me to simulate not knowing it). This is also info it’s not at all reasonable to expect all readers of the site to know.
I think it’s very unfortunate if someone incorrectly thinks someone else is being attacked or disincentivized from contributing. It’s worth thinking about how one might avoid it. There are obviously bad solutions, but that doesn’t mean there aren’t better ones than just ignoring the problem.
--------------------
[1] Rational in the sense of reaching the appropriate conclusion with the data available.
[2] By significance I mean what it is Bayesian evidence for.
Warning to Duncan
(See also: Raemon’s moderator action on Said)
Since we were pretty much on the same page, Raemon delegated writing this warning to Duncan to me, and signed off on it.
Generally, I am quite sad if, when someone points at or objects to bad behavior, they end up facing moderator action themselves. It doesn’t set a great incentive. At the same time, some of Duncan’s recent behavior also feels quite bad to me, and to not respond to it would also create a bad incentive – particularly if the undesirable behavior gets a person something they like.
Here’s my story of what happened, building off of some of Duncan’s own words and his endorsement of something I said in a previous exchange with him:
Duncan felt that Said engaged in various behaviors that hurt him (I’m confident of this based on Duncan’s words) and that were in general bad (inferred from Duncan writing posts describing why those behaviors are bad). Such bad/hurtful behaviors include strawmanning, psychologizing at length, and failing to put in symmetric effort. For example, Said argued that Duncan banned him from his posts because Said disagreed. I am pretty sympathetic to these accusations against Said (and endorse moderation action against Said) and don’t begrudge Duncan any feelings of frustration and hurt he might have.
Duncan additionally felt that the response of other users (e.g. in voting patterns) and moderators was not adequate.
Given what he felt to be the inadequate response from others, Duncan decided to defend himself (or try to cause others to defend him). His manner of doing so, I feel, generates quite a few costs that warrant moderator action to incentivize against Duncan or others imposing these costs on the site and mods in the future.
The following is a summary of what I consider Duncan’s self-defensive behavior (not necessarily in order of occurrence).
Argued back and forth in the comments
Banned Said from his posts
Argued more in comments not on his own posts
Requested that the moderators intervene, and quickly (offsite)
Wrote a top-level post at least somewhat in response to Said (planned to write it anyhow, but prioritized based on Said interactions), and it was interpreted by others as being about Said and calling for banning him.
In further comments, identified statements that he says cause him to categorize and treat Said as an intentional liar.
Said he’d prefer a world where both he and Said were banned to one where neither is.
Accused the LessWrong moderators of not maintaining a tended garden, and said that perhaps he should just leave.
Individually and done occasionally, I think many of these actions are fine. The “ban users from your posts” feature is there so that you don’t have to engage with a user you don’t want to; as a mod, I appreciate people flagging behavior they think isn’t good; writing top-level posts describing why you think certain behaviors are bad (in a timeless/universal way) is also good; and if the site doesn’t make you feel safe, saying so and leaving also seems legit (I’m sad if this is true, but I’d like to know it rather than have someone leave silently).
Requesting quick moderator intervention, declaring that he categorizes and treats Said as an intentional liar, saying that he’d prefer that both he and Said be banned rather than neither, and writing a post that at least some people interpreted as calling for Said to be banned, all feel like a pretty “aggressive” response. Combined with the other behaviors that are more usually okay but still confrontational, it feels to me like Duncan’s response was quite escalatory in a way that generates costs.
First, I think it’s bad to have users on the site whom others are afraid of getting into conflict with. Naturally, people weigh the expected value and expected costs of posting/commenting/etc., and I know with high confidence that I myself and at least three others (and I assume quite a few more) are pretty afraid to get into conflict with Duncan, because Duncan argues long and hard and generally invests a lot of time to defend himself against what feels like harm, e.g. all the ways he has done so on this occasion. I assume here that others are similar to me (not everyone, but enough) in being quite wary of accidentally doing something Duncan reacts to as a terrible norm violation, because doing so can result in a really unpleasant conflict (this has happened twice that I know of with other LW team members).
I recognize that Duncan feels like he’s trying to make LessWrong a place that’s net positive for him to contribute to, and does so in some prosocial ways (e.g. writing Basics of Rationalist Discourse), but I need to call out ways in which his manner of doing so also causes harm, e.g. a climate of fear where people won’t express disagreement because defending themselves against Duncan would be extremely exhausting and effortful.
This is worsened by the fact that often Duncan is advocating for norms. If he were writing about trees and you were afraid to disagree, it might not be a big deal. But if he is arguing for norms for your community, it’s worse if you think he might be advocating something wrong but disagreeing feels very risky.
Second, Duncan’s behavior directly or indirectly requires moderator attention, sometimes fairly immediately (partly because he’s requested a quick response, and partly because if there’s an overt conflict between users, mods really ought to chime in sooner rather than later). I would estimate that the team has collectively spent 40+ hours on moderation over two weeks in response to recent events (some of that I place on Said, who probably needed moderation anyway), but the need to drop other work and respond to the conflict right now is time-consuming and disruptive. Not counting exactly, it feels like this has happened periodically for several years with Duncan.
Duncan is a top contributor to the site, and I think he for the most part advocates for good norms, so it feels worth it to devote a good amount of time and attention to his requests – but only so much. So there’s a cost there I want to call out that was incurred from recent behavior. (I think that if Duncan had notified us that he really didn’t like some of Said’s behavior, pointed to a thread, and said he’d like a response within two months or else he might leave the site – that would have been vastly less costly to us than what happened.)
I don’t think we’ve previously pointed out the costs here, so it’s fair to issue a warning rather than any harsher action.
Duncan, if you do things that impose what feel to me like costs of:
Taking actions such that I predict users will be afraid to engage with you, at the same time as you advocate norms
Demanding fast responses to things you don’t like, thereby costing the mods a lot of resources in excess of what seems reasonable (and you’re basically out of budget for a long while now)
then the moderators will escalate moderator action in response, e.g. rate limits or bans of escalating duration.
A couple of notes of clarification. I feel that this warning is warranted on the basis of Duncan’s recent behavior re: Said alone, but my thinking is informed by similar-ish patterns from the past that I didn’t get into here. Also, for other users wondering if this warning could apply to them: theoretically, yes, but I think most users aren’t at all close to doing the things here that I don’t like. If you have not previously had extensive engagement with the mods about a mix of your complaints and behavior, then what I’m describing here as objectionable is very unlikely to be something you’re doing.
To close, I’ll say I’m sad that the current LessWrong feels like somewhere where you, Duncan, need to defend yourself. I think many of your complaints are very very reasonable, and I wish I had the ability to immediately change things. It’s not easy and there are many competing tradeoffs, but I do wish this was a place where you felt like it was entirely positive to contribute.