Moderation notes re: recent Said/Duncan threads
Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users, and it seems good for the LessWrong userbase to have the opportunity to evaluate them and respond. I’m stickying this post for a day or so.
Recently there’s been a series of posts and comment exchanges between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.
For context, here’s a quick recap of recent relevant events as I’m aware of them. (I’m glossing over many details that are relevant, but getting everything exactly right is tricky.)
Duncan posts Basics of Rationalist Discourse. Said writes some comments in response.
Zack posts “Rationalist Discourse” Is Like “Physicist Motors”, in which Duncan and Said argue some more, and Duncan eventually says “goodbye”, which I assume coincides with banning Said from commenting further on Duncan’s posts.
I publish LW Team is adjusting moderation policy. Lionhearted suggests “Basics of Rationalist Discourse” as a standard the site should uphold. Paraphrasing here: Said objects to a post being set as the site standard if not all non-banned users can discuss it. More discussion ensues.
Duncan publishes Killing Socrates, a post about a general pattern of LW commenting that alludes to Said but doesn’t reference him by name. Commenters other than Duncan do bring up Said by name, and the discussion gets into “is Said net positive/negative for LessWrong?” in a discussion section where Said can’t comment.
@gjm publishes On “aiming for convergence on truth”, which further discusses/argues a principle from Basics of Rationalist Discourse that Said objected to. Duncan and Said argue further in the comments. I think it’s a fair gloss to say “Said makes some comments about what Duncan did, which Duncan says are false enough that he’d describe Said as intentionally lying about them. Said objects to this characterization” (although exactly how to characterize this exchange is maybe a crux of discussion)
LessWrong moderators got together for ~2 hours to discuss this overall situation, and how to think about it both as an object-level dispute and in terms of some high level “how do the culture/rules/moderation of LessWrong work?”.
I think we ended up with fairly similar takes, but getting to the point where we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames about the situation. So, some of us (at least Vaniver and I, maybe others) are going to start by posting some top-level comments here. People can weigh in on the discussion. I’m not 100% sure what happens after that, but we’ll reflect on the discussion and decide whether to take any high-level mod actions.
If you want to weigh in, I encourage you to take your time even if there’s a lot of discussion going on. If you notice yourself in a rapid back and forth that feels like it’s escalating, take at least a 10 minute break and ask yourself what you’re actually trying to accomplish.
I do note: the moderation team will be making an ultimate call on whether to take any mod actions based on our judgment. (I’ll be the primary owner of the decision, although I expect if there’s significant disagreement among the mod team we’ll talk through it a lot). We’ll take into account arguments various people post, but we aren’t trying to reflect the wisdom of crowds.
So you may want to focus on engaging with our cruxes rather than with what other random people in the comments think.
Preliminary Verdict (but not “operationalization” of verdict)
tl;dr – @Duncan_Sabien and @Said Achmiz can each write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on one of the following:
credibly committing to changing their behavior in a fairly significant way,
accepting some kind of tech solution that limits their engagement in some reliable way that doesn’t depend on their continued behavior, or
being banned from commenting on other people’s posts (but still allowed to make new top-level posts and shortforms).
(After the two comments they can continue to PM the LW team, although we’ll have some limit on how much time we’re going to spend negotiating)
Some background:
Said and Duncan are both among the most complained-about users since LW2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I’d be sad to see go.
The LessWrong team has spent hundreds of person hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of “we learned new useful things about site governance”) there’s a limit to how much it’s worth moderating or mediating conflict re: two particular users.
So, something pretty significant needs to change.
A thing that sticks out in both the case of Said and Duncan is that they a) are both fairly law-abiding (i.e. when the mods have asked them for concrete things, they adhere to our rules, and clearly support rule-of-law and the general principle of Well Kept Gardens), but b) both have a very strong principled sense of what a “good” LessWrong would look like, and are optimizing pretty hard for that within whatever constraints we give them.
I think our default rules are chosen to be something that someone might trip accidentally, if you’re mostly trying to be a good stereotypical citizen but occasionally end up having a bad day. Said and Duncan are both trying pretty hard to be good citizens of another country, one that the LessWrong team is consciously not trying to be. It’s hard to build good rules/guidelines that actually robustly deal with that kind of optimization.
I still don’t really know what to do, but I want to flag that the goal I’ll be aiming for here is “make it such that Said and Duncan either have actively (credibly) agreed to stop optimizing in a fairly deep way, or are somehow limited by site tech such that they can’t do the cluster of things they want to do that feels damaging to me.”
If neither of those strategies turns out to be tractable, banning is on the table (even though I think both of them contribute a lot in various ways, and I’d be pretty sad to resort to that option). I have some hope that tech-based solutions can work.
(This is not a claim about which of them is more valuable overall, or better/worse/right-or-wrong-in-this-particular-conflict. There’s enough history with both of them being above-a-threshold-of-worrisome that it seems like the LW team should just actually resolve the deep underlying issues, regardless of who’s more legitimately aggrieved this particular week)
Re: Said:
One of the most common complaints I’ve gotten about LessWrong, from both new users as well as established, generally highly regarded users, is “too many nitpicky comments that feel like they’re missing the point”. I think LessWrong is less fragile than it was in 2018 when I last argued extensively with Said about this, but I think it’s still an important/valid complaint.
Said seems to actively prefer a world where the people who are annoyed by him go away, and thinks it’d be fine if this meant LessWrong had radically fewer posts. I think he’s misunderstanding something about how intellectual progress actually works, and about how valuable his comments actually are. (As I said previously, I tend to think Said’s first couple comments are worthwhile. The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics)
We’ve had extensive conversations with Said about changing his approach here. He seems pretty committed to not changing his approach. So, if he’s sticking around, I think we’d need some kind of tech solution. The outcome I want here is that in practice Said doesn’t bother people who don’t want to be bothered. This could involve solutions somewhat specific-to-Said, or (maybe) be a sitewide rule that works out to stop a broader class of annoying behavior. (I’m skeptical the latter will turn out to work without being net-negative, capturing too many false positives, but seems worth thinking about)
Here are a couple ideas:
Easily-triggered rate-limiting. I could imagine an admin feature that literally just lets Said comment a few times on a post, but, if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day. I expect fine-tuning this to actually work the way I imagine it in my head is a fair amount of work, but not that much. (See the sketch after this list.)
Proactive warning. If a post author has downvoted Said’s comments on their post multiple times, they get some kind of UI alert saying “Yo, FYI, admins have flagged this user as having a pattern of commenting that a lot of authors have found net-negative. You may want to take that into account when deciding how much to engage.”
There’s some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of “authors can ban users” is worth revisiting so my first impulse is to avoid investing in it further until we’ve had some more top-level discussion about the feature.
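To make the rate-limiting idea a bit more concrete, here is a minimal sketch of the logic I have in mind. All interface names, function names, and thresholds are hypothetical illustrations, not LessWrong’s actual code:

```typescript
// Sketch of downvote-triggered, wordcount-based rate-limiting.
// All names and thresholds here are hypothetical.

interface CommenterStats {
  commentsOnPost: number;   // comments this user has already made on the post
  netKarmaOnPost: number;   // combined karma of those comments
  wordsUsedToday: number;   // words this user has written on the post today
}

interface RateLimitDecision {
  allowed: boolean;
  wordBudget?: number;      // remaining wordcount, if rate-limited
  reason?: string;
}

const FREE_COMMENTS = 3;        // comments allowed before any check applies
const DOWNVOTE_TRIGGER = -5;    // net karma at or below this triggers the limit
const DAILY_WORD_BUDGET = 300;  // words per day once the limit is triggered

function checkRateLimit(stats: CommenterStats): RateLimitDecision {
  // The first few comments on a post are always allowed.
  if (stats.commentsOnPost < FREE_COMMENTS) {
    return { allowed: true };
  }
  // Significant downvoting switches the user to a daily word budget,
  // forcing them to wrap up their current points and call it a day.
  if (stats.netKarmaOnPost <= DOWNVOTE_TRIGGER) {
    const remaining = DAILY_WORD_BUDGET - stats.wordsUsedToday;
    return remaining > 0
      ? { allowed: true, wordBudget: remaining }
      : { allowed: false, reason: 'Word budget exhausted for today.' };
  }
  return { allowed: true };
}
```

Most of the fine-tuning work would be in choosing the trigger and budget values, and in deciding whether the word budget resets per post or per day.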
Why is it worth this effort?
You might ask “Ray, if you think Said is such a problem user, why bother investing this effort instead of just banning him?”. Here are some areas I think Said contributes in a way that seem important:
Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.net. (edit: as Ben Pace notes, this is pretty significant, and I agree with his note that “Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world”)
Most of his comments are in fact just pretty reasonable and good in a straightforward way.
While I don’t get much value out of protracted conversations about it, I do think there’s something valuable about Said being very resistant to getting swept up in fad ideas. Sometimes the emperor in fact really does have no clothes. Sometimes the emperor has clothes, but you really haven’t spelled out your assumptions very well and are confused about how to operationalize your idea. I do think this is pretty important and would prefer Said to somehow “only do the good version of this”, but seems fine to accept it as a package-deal.
Re: Duncan
I’ve spent years trying to hash out “what exactly is the subtle but deep/huge difference between Duncan’s moderation preferences and the LW team’s?” I have found each round of that exchange valuable, but typically whatever-we-thought-was-the-crux didn’t turn out to be a particularly Big Crux.
I think I care about each of the things Duncan is worried about (i.e. the things listed in Basics of Rationalist Discourse). But I tend to think the way Duncan goes about trying to enforce such things is extremely costly.
Here’s this month/year’s stab at it: Duncan cares particularly about strawmans/mischaracterizations/outright lies getting corrected quickly (i.e. within ~24 hours). (See Concentration of Force for his writeup on at least one set of reasons this matters.) I think there is value in correcting them or telling people to “knock it off” quickly. But,
a) moderation time is limited
b) even in the world where we massively invest in moderation… the thing Duncan cares most about moderating quickly just doesn’t seem like it should necessarily be at the top of the priority queue to me?
I was surprised and updated on You Don’t Exist, Duncan getting as heavily upvoted as it did, so I think it’s plausible that this is all a bigger deal than I currently think it is. (that post goes into one set of reasons that getting mischaracterized hurts). And there are some other reasons this might be important (that have to do with mischaracterizations taking off and becoming the de-facto accepted narrative).
I do expect most of our best authors to agree with Duncan that these things matter, and generally want the site to be moderated more heavily somehow. But I haven’t actually seen anyone but Duncan argue they should be prioritized nearly as heavily as he wants. (i.e. rather than something you just mostly take-in-stride, downvote and then try to ignore, focusing on other things)
I think most high-contributing users agree the site should be moderated more (see the significant upvotes on LW Team is adjusting moderation policy), but don’t necessarily agree on how. It’d be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
I don’t know that that really captured the main thing here. I feel less resolved on what should change on LessWrong re: Duncan. But I (and other LW site moderators) want to be clear that, while strawmanning is bad and you shouldn’t do it, we don’t expect to intervene on most individual cases. I recommend strong-downvoting, and leaving one comment stating that the thing seems false.
I continue to think it’s fine for Duncan to moderate his own posts however he wants (although, as noted previously, I think an exception should be made for posts that are actively pushing sitewide moderation norms).
Some goals I’d have are:
people on LessWrong feel safe that they aren’t likely to get into sudden, protracted conflict with Duncan that persists outside his own posts.
the LessWrong team and Duncan are on-the-same-page about the LW team not being willing to allocate dozens of hours of attention at a moment’s notice in the specific ways Duncan wants. I don’t think it’s accurate to say “there’s no lifeguard on duty”, but I think it’s quite accurate to say that the lifeguard on duty isn’t planning to prioritize the things Duncan wants; so, Duncan should basically participate on LessWrong as if there is, in effect, “no lifeguard” from his perspective. I’m spending ~40 hours this week processing this situation, with a goal of basically not having to do that again.
In the past, Duncan took down all his LW posts when LW seemed to be actively hurting him. I’ve asked him about this in the past year, and (I think?) he said he was confident that he wouldn’t do that again. One thing I’d want going forward is a more public commitment that, if he’s going to keep posting on LessWrong, he’s not going to do that again. (I don’t mind him taking down 1–2 problem posts that led to really frustrating commenting experiences for him, but if he were likely to take all his posts down, that undercuts much of the value of having him here contributing.)
FWIW I do think it’s moderately likely that the LW team writes a post taking many concepts from Basics of Rationalist Discourse and integrating them into our overall moderation policy. (It’s maybe doable for Duncan to rewrite the parts that some people object to, and to enable commenting on those posts by everyone, but I think it’s kinda reasonable for people to feel uncomfortable with Duncan setting the framing, and it’s worth the LW team having a dedicated “our frame on what the site norms are” anyway.)
In general I think Duncan has written a lot of great posts – many of his posts have been highly ranked in the LessWrong review. I expect him to continue to provide a lot of value to the LessWrong ecosystem one way or another.
I’ll note that while I have talked to Duncan for dozens(?) of hours trying to hash out various deep issues and not met much success, I haven’t really tried negotiating with him specifically about how he relates to LessWrong. I am fairly hopeful we can work something out here.
I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.
I note re:
… that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would’ve been less likely to leave and would be more likely to return with marginal movement in that direction.
I don’t know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like “how would you have felt if we had moved 25% in this direction,” I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more “what? No, we’re well-adapted to the current environment; we’re the ones who’ve been filtered for.”
(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)
Nod. I want to clarify, the diff I’m asking about and being skeptical about is “assuming, holding constant, that LessWrong generally tightens moderation standards along many dimensions, but doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’”
i.e. the LessWrong team is gearing up to invest a lot more in moderation one way or another. I expect you to be glad that happened, but still frequently feel in pain on the site and feel a need to take some kind of action regarding it. So, the poll I’d want is something like “given overall more mod investment, are people still especially concerned about the issues I associate with Duncan-in-particular”.
I agree some manner of poll in this space would be good, if we could implement it.
FWIW, I don’t avoid posting because of worries of criticism or nitpicking at all. I can’t recall a moment that’s ever happened.
But I do avoid posting once in a while, and avoid commenting, because I don’t always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.
If I’d been on Lesswrong a lot 10 years ago, this wouldn’t stop me much. I used to be very… well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people are being bullies or engaging in bad argument norms or polluting the epistemic commons or using performative Dark Arts and so on.
But moderators of various sites (not LW) have often failed to be able to adjudicate such situations to my satisfaction, and over time I just felt like it wasn’t worth the effort in most cases.
From what I’ve observed, the LW mod team is far better at this than most sites’. But when I imagine a nearer-to-perfect world, it does include a lot more “heavy-handed” moderation, in the form of someone outside an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.
I’m not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special “Flag a moderator” button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs* There’s probably a scale at which it is valuable for most people while still being insufficient for someone like Duncan. Maybe the amount decreases each time you’re ruled against.
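To gesture at the mechanics I have in mind, here’s a sketch; every name and number in it is made up, and the real design would need tuning:

```typescript
// Sketch of a karma-scaled "Flag a moderator" quota. All names and
// numbers are hypothetical illustrations.

interface FlaggerState {
  karma: number;              // account karma
  flagsUsedThisMonth: number;
  timesRuledAgainst: number;  // times a mod ruled against this user's flags
}

// Base allowance, plus one flag per order of magnitude of karma,
// minus one for each time the user's flag was ruled against.
function monthlyFlagAllowance(s: FlaggerState): number {
  const karmaBonus = Math.floor(Math.log10(Math.max(s.karma, 1)));
  return Math.max(0, 2 + karmaBonus - s.timesRuledAgainst);
}

function canFlagModerator(s: FlaggerState): boolean {
  return s.flagsUsedThisMonth < monthlyFlagAllowance(s);
}
```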
Overall I don’t want to overpromise something like “if LW has a stronger concentration of force expectation for good conversation norms I’d participate 100x more instead of just reading.” But 10x more to begin with, certainly, and maybe more than that over time.
This is similar to the idea for the Sunshine Regiment from the early days of LW 2.0, where the hope was that if we had a wide team of people who were sometimes called on to do mod-ish actions (like explaining what’s bad about a comment, or how it could have been worded, or linking to the relevant part of The Sequences, or so on), we could get much more of it. (It would be a counterspell to the bystander effect (someone specific gets assigned a comment to respond to), a license to respond at all (because otherwise, who are you to complain about this comment?), a counterfactual-matching incentive to do it (if you do the work you’re assigned, you also fractionally encourage everyone else in your role to do the work they’re assigned), and a scheme to lighten the load (as there might be more mods than things to moderate).)
It ended up running into the problem that there actually weren’t all that many people suited to and interested in doing moderator work, and so there was a small team of people who would do it (which wasn’t large enough to reliably feel on top of things, instead of needing to prioritize to avoid scarcity).
I also don’t think there’s enough uniformity of opinion among moderators or high-karma users or w/e that having a single judge evaluate whole situations will actually resolve them. (My guess is that if I got assigned to this case Duncan would have wanted to appeal, and if RobertM got assigned to this case Said would have wanted to appeal, as you can see from the comments they wrote in response. This is even though I think RobertM and I agree on the object-level points and only disagree on interpretations and overall judgments of relevance!) I feel more optimistic about something like “a poll” of a jury drawn from some limited pool, where some situations go 10-0, others 7-3, some 5-5; this of course 10xs the costs compared to a single judge. (And open-access polls have both the benefit and drawback of volunteer labor.)
All good points, and yeah, I did consider the issue of “appeals”, but considered “accept the judgement you get” part of the implicit (or even explicit, if necessary) agreement made when raising that flag in the first place. Maybe it would require both people to mutually accept it.
But I’m glad the “pool of people” variation was tried, even if it wasn’t sustainable as volunteer work.
I’m not sure that’s true? I was asked at the time to be Sunshine mod, I said yes, and then no one ever followed up to assign me any work. At some point later I was given an explanation, but I don’t remember it.
You mean it’s considered a reasonable thing to aspire to, and just hasn’t reached the top of the list of priorities? This would be hair-raisingly alarming if true.
I’m not sure I parse this. I’d say yes, it’s a reasonable thing to aspire to and hasn’t reached the top of (the moderator/admins) priorities. You say “that would be alarming”, and infer… something?
I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does?
(I’m about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so and he can correct me if I’m wrong)
I think Duncan thinks “Rationalist Discourse” Is Like “Physicist Motors” strawmans his position yet still gets mostly upvoted, and that if he weren’t going out of his way to make this obvious, people wouldn’t notice. And when he does argue that this is happening, his comments don’t get upvoted much at all.
You might just say “well, Duncan is wrong about whether this is strawmanning”. I think it is [edit for clarity: somehow] strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted.
I think if I were to try to say “knock it off, here’s a warning” the way I think Duncan wants me to, this would a) just be more time-consuming than mods have the bandwidth for (we don’t do that sort of move in general, not just for this class of post), and b) disincentivize literal-Zack and new marginal Zack-like people from posting; and I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment)
It’s a bad thing to institute policies when good proxies are missing. It doesn’t matter if the intended objective is good: a policy that isn’t feasible to sanely execute makes things worse.
Whether statements about someone’s inner state are “unfounded” or whether something is a “strawman” is hopelessly muddled in practice, only open-ended discussion has a hope of resolving that. Not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope, I don’t see a principled difference. People should be allowed to be wrong, that’s the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it’s not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It’s bad on both levels, hence “hair-raisingly alarming”.)
I’m actually still kind of confused about what you’re saying here (and in particular whether you think the current moderator policy of “don’t get involved most of the time” is correct)
You implied, and then confirmed, that you consider a policy for a certain objective an aspiration. I argued that the policies I can imagine that target that objective would be impossible to execute, making things worse via collateral damage; and that, separately, the objective seems bad (moderating factual claims).
(In the above two comments, I’m not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn’t seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I’m not averse to re-injecting the context into their discussion. But I won’t necessarily find that interesting or have things to say on.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators’ arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related to how I find the objective of the hypothetical policy against strawmanning a bad thing.
Okay, gotcha, I had not understood that. (Vaniver’s comment elsethread had also cleared this up for me I just hadn’t gotten around to replying to it yet)
One thing “not close to the top of our list of priorities” means is that I haven’t actually thought that much about the issue in general. On the issue of “do LessWrong moderators think they should respond to strawmanning?” (or various other fallacies), my guess (thinking about it for like 5 minutes recently) is something like:
I don’t think it makes sense for moderators to have a “policy against strawmanning”, in the sense that we take some kind of moderator action against it. But, a thing I think we might want to do is “when we notice someone strawmanning, make a comment saying ‘hey, this seems like strawmanning to me?’” (which we aren’t treating as special mod comment with special authority, more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like “proactively noticing and responding to various fallacious arguments at scale.”
(FYI @Vladimir_Nesov I’m curious if this sort of thing still feels ‘hair raisingly alarming’ to you)
(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)
Why do you think it’s strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!
As I’ve explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment, I gave two examples illustrating what I thought the relevant evidentiary standard looks like.
If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I’m willing to do your work for you. When I imagine being a lawyer hired to argue that “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that “if someone did [speak of ‘physicist motors’], you might quietly begin to doubt how much they really knew about physics”, and (b) the part where the author characterizes Bensinger’s “defeasible default” of “role-playing being on the same side as the people who disagree with you” as being what members of other intellectual communities would call “concern trolling.”
However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published.
In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger’s knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), “concern-trolling” is a pejorative term; it’s certainly true that Bensinger would not self-identify as engaging in concern-trolling. But that’s not what the text is arguing: the claim is that the substantive behavior that Bensinger recommends is something that other groups would identify as “concern trolling.” I continue to maintain that this is true.
Regarding another user’s claim that the “entire post” in question “is an overt strawman”, that accusation was rebutted in the comments by both myself and Said Achmiz.
In conclusion, I stand by my post.
If you disagree with my analysis here, that’s fine: I want people to be able to criticize my work. But I think you should be able to say why, specifically. I think it’s great when people make negative-valence claims about my work, and then back up those claims with specific arguments that I can learn from. But I think it’s bad when people make negative-valence claims about my work that they don’t argue for, and then I have to do their work for them as part of my service to the church of arbitrarily large amounts of interpretive labor (as I’ve done in this comment).
I meant the primary point of my previous comment to be: Duncan’s accusation in that thread is below the threshold of “deserves moderator response” (i.e. Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don’t plan to do that, because I don’t think it’s that big a deal). (I edited the previous comment to say “kinda” strawmanning, to clarify the emphasis more.)
My point here was just explaining to Vladimir why I don’t find it alarming that the LW team doesn’t prioritize strawmanning the way Duncan wants (I’m still somewhat confused about what Vlad meant with his question though and am honestly not sure what this conversation thread is about)
I see Vlad as saying “that it’s even on your priority list, given that it seems impossible to actually enforce, is worrying” not “it is worrying that it is low instead of high on your priority list.”
I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable.
I don’t think moderators showing up and making a judgment and proclamation is the right answer. I’m more interested in making it so that people reading the thread can provide the feedback, e.g. via Reacts.
Just noting that “What specifically did it get wrong?” is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length.
That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has also publicly retracted).
Given that public retraction, I’m considering going back and in fact answering the “what specifically” question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it’s just a question of whether it’s worth taking the time to write it out months later.)
I’m very confused, how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?
The author can say that a reader’s post is an inaccurate representation of the author’s ideas, but how can the author possibly read the reader’s mind and conclude that the reader is doing it on purpose? Isn’t that a claim that requires exceptional evidence?
Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won’t matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, more attractive get more leeway).
I personally would very much rather people being judged by their concrete actions or impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author’s intent or the majority of readers’ understanding), rather than their intent (e.g. saying someone is strawmanning).
To be against both strawmanning (with weak evidence) and ‘making unfounded statements about a person’s inner state’ seems to me like a self-contradictory and inconsistent stance.
I think Said and Duncan are clearly channeling this conflict, but the conflict is not about them, and doesn’t originate with them. So by having them go away or stop channeling the conflict, you leave it unresolved and without its most accomplished voices, shattering the possibility of resolving it in the foreseeable future. This is the hush-hush strategy of dealing with troubling observations: fixing symptoms instead of researching the underlying issues, however onerous that is proving to be.
(This announcement is also rather hush-hush; it’s not a post, and so I’ve only just discovered it, 5 days later. This leaves it with less scrutiny than I think the transparency of such an important step requires.)
It’s an update to me that you hadn’t seen it (I figured since you had replied to a bunch of other comments you were tracking the thread, and more generally figured that since there are 360 comments on this thing it wasn’t suffering from lack of scrutiny). But, plausibly we should pin it for a day when we make our next set of announcement comments (which are probably coming sometime this weekend, fwiw).
I meant this thread specifically, with the action announcement, not the post. The thread was started 4 days after the post, so everyone who wasn’t tracking the post had every opportunity to miss it. (It shouldn’t matter for the point about scrutiny that I in particular might’ve been expected to not miss it.)
Just want to note that I’m less happy with a lesswrong without Duncan. I very much value Duncan’s pushback against what I see as a slow decline in quality, and so I would prefer him to stay and continue doing what he’s doing. The fact that he’s being complained about makes sense, but is mostly a function of him doing something valuable. I have had a few times where I have been slapped down by Duncan, albeit in comments on his Facebook page, where it’s much clearer that his norms are operative, and I’ve been annoyed, but each of those times, despite being frustrated, I have found that I’m being pushed in the right direction and corrected for something I’m doing wrong.
I agree that it’s bad that his comments are often overly confrontational, but there’s no way to deliver constructive feedback that doesn’t involve a degree of confrontation, and I don’t see many others pushing to raise the sanity waterline. In a world where a dozen people were fighting the good fight, I’d be happy to ask him to take a break. But this isn’t that world, and it seems much better to actively promote a norm of people saying they don’t have energy or time to engage than telling Duncan (and maybe / hopefully others) not to push back when they see thinking and comments which are bad.
I think I want to reiterate my position that I would be sad about Said not being able to discuss Circling (which I think is one of the topics in that fuzzy cluster). I would still like to have a written explanation of Circling (for LW) that is intelligible to Said, and him being able to point out which bits are unintelligible and not feel required to pretend that they are intelligible seems like a necessary component of that.
With regards to Said’s ‘general pattern’, I think there’s a dynamic around socially recognized gnosis where sometimes people will say “sorry, my inability/unwillingness to explain this to you is your problem” and have the commons on their side or not, and I would be surprised to see LW take the position that authors decide for that themselves. Alternatively, tech that somehow makes this more discoverable and obvious—like polls or reacts or w/e—does seem good.
I think productive conversations stem from there being some (but not too much) diversity in what gnosis people are willing to recognize, and in the ability for subspaces to have smaller conversations that require participants to recognize some gnosis.
Is there any evidence that either Duncan or Said are actually detrimental to the site in general, or is it mostly in their interactions directly with each other? As far as I can see, 99% of the drama here is in their conflicts directly with each other and heavy moderation team involvement in it.
From my point of view (as an interested reader and commenter), this latest drama appears to have started partly due to site moderation essentially forcing them into direct conflict with each other via a proposal to adopt norms based on Duncan’s post while Said and others were and continue to be banned from commenting on it.
From this point of view, I don’t see what either Said or Duncan has done to justify any sort of ban, temporary or not.
This decision is based mostly on past patterns with both of them, over the course of ~6 years.
The recent conflict, in isolation, is something where I’d kinda look sternly at them and kinda judge them (and maybe a couple others) for getting themselves into a demon thread*, where each decision might look locally reasonable but nonetheless it escalates into a weird proliferating discussion that is (at best) a huge attention sink and (at worst) gets people into an increasingly antagonistic fight that brings out people’s worse instincts. If I spent a long time analyzing I might come to more clarity about who was more at fault, but I think the most I might do for this one instance is ban one or both of them for like a week or so and tell them to knock it off.
The motivation here is from a larger history. (I’ve summarized one chunk of that history from Said here, and expect to go into both a bit more detail about Said and a bit more about Duncan in some other comments soon, although I think I describe the broad strokes in the top-level-comment here)
And notably, my preference is for this not to result in a ban. I’m hoping we can work something out. The thing I’m laying down in this comment is “we do have to actually work something out.”
I condemn the restrictions on Said Achmiz’s speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.
his speech is not being restricted in variety, it’s being ratelimited. the difference there is enormous.
Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question “credibly commit[ting] to changing their behavior in a fairly significant way”, “accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn’t depend on their continued behavior”, or “be[ing] banned from commenting on other people’s posts”.
The first is a restriction on variety of speech. (I don’t see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual nature of an online forum where everything we do here is speech.) The third is a restriction of venue, which I claim predictably results in a restriction of variety. (Being forced to relegate your points into a shortform or your own post, won’t result in the same kind of conversation as being able to participate in ordinary comment threads.) I suppose the “tech solution” of the second could be mere rate-limiting, but the “doesn’t depend on their continued behavior” clause makes me think something more onerous is intended.
(The grandparent only mentions Achmiz because I particularly value his contributions, and because I think many people would prefer that I don’t comment on the other case, but I’m deeply suspicious of censorship in general, for reasons that I will likely explain in a future post.)
The tech solution I’m currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I’m leaning towards either “3 comments per post” or “3 comments per post per day”. (My ideal world, for Said, is something like “3 comments per post to start, but, if nothing controversial happens and he’s not ruining the vibe, he gets to comment more without limit.” But that’s fairly difficult to operationalize, and a lot of dev-time for a custom feature limiting one or two particular users.)
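As a sketch of that “ideal world” version (names and the controversy proxy here are hypothetical; operationalizing “ruining the vibe” is the hard part):

```typescript
// Sketch of "3 comments per post per day, unless nothing controversial
// has happened". The controversy proxy is a hypothetical stand-in.

interface PostActivity {
  commentsOnPostToday: number;
  hadControversialComment: boolean; // e.g. any comment on this post significantly downvoted
}

const BASE_LIMIT = 3; // default comments per post per day

function mayComment(activity: PostActivity): boolean {
  // If nothing controversial has happened, the cap is lifted.
  if (!activity.hadControversialComment) return true;
  return activity.commentsOnPostToday < BASE_LIMIT;
}
```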
I do have a high-level goal of “users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so”. The question here is “do you want the ‘real work’ of developing new rationality techniques to happen on LessWrong, or someplace else where Said/etc can’t bother you?” (which is what’s mostly currently happening).
So, yeah the concrete outcome here is Said not getting to comment everywhere he wants, but he’s already not getting to do that, because the relevant content + associated usage-building happens off lesswrong, and then he finds himself in a world where everyone is “suddenly” in significant agreement about some “frame control” concept he’s never heard of. (I can’t find the exact comment atm but I remember him expressing alarm at the degree of consensus on frame control, in the comments of Aella’s post. There was consensus because somewhere between 50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years. I’m not sure if there’s a world where that discussion was happening on LW because frame-control tends to come up in dicey sensitive adversarial situations)
So, I think the censorship policy you’re imagining is a fabricated option.
My current guess of actual next steps are “Said gets 3 comments per post per day” restriction, is banned from commenting on shortform in particular (since our use case for that is specifically antithetical to the vibe Said wants), and then (after also setting up some other moderation tools and making some judgment calls on some other similar-but-lower-profile-users), messaging people like Logan Strohl and saying “hey, we’ve made a bunch of changes, we’d like it if you came in and tried using the site again”, and hope that this time it actually works.
(Duncan might get a similar treatment, for fairly different reasons, although I’m more optimistic about him and us actually negotiating something that requires less heavy-handed restriction.)
We already have a user-level personal ban feature! (Said doesn’t like it, but he can’t do anything about it.) Why isn’t the solution here just, “Users who don’t want to receive comments from Said ban him from their own posts”? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.
This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I’m unlikely to guess it; you’ll have to clarify.) It’s true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by individual users—currently, that’s Elizabeth, and DirectedEvolution, and one other user).
This would make Less Wrong worse for me. I want Said Achmiz to have unlimited, unconditional commenting privileges on my posts. (Unconditional means the software doesn’t stop Said from posting a fourth comment; “to start” is not unconditional if it requires a human to approve the fourth comment.)
More generally, as a long-time user of Less Wrong (original join date 26 February 2009, author of five Curated posts) and preceding community (first Overcoming Bias comment 22 December 2007, attendee of the first Overcoming Bias meetup on 21 February 2008), I do not want Said Achmiz to be a second-class citizen in my garden. If we have a user-level personal ban feature that anyone can use, I might or might not think that’s a good feature to have, but at least it’s a feature that everyone can use; it doesn’t arbitrarily single out a single user on a site-wide basis.
Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account. Why is that? This seems like a question you should be able to answer.
Stipulating that votes on this comment are more than negligibly informative on this question… it seems bizarre to count karma rather than agreement votes (currently 51 agreement from 37 votes). But also anyone who downvoted (or disagreed) here is someone who you’re counting as not being taken into account, which seems exactly backwards.
Some other random notes (probably not maximally cruxy for you, but worth noting):
1. If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I’d be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.
But we’ve had tons of conversations with Said asking him to adjust his behavior, and he seems pretty committed to sticking to his current behavior. At best he seems grudgingly willing to avoid some threads if there are clear-cut rules we can spell out, but I don’t trust him to actually tell the difference in many edge cases.
We’ve spent a hundred-plus person-hours over the years thinking about how to limit Said’s damage, and have a lot of other priorities on our plate. I consider it a priority to resolve this in a way that won’t continue to eat up more of our time.
2. I did list “actually just encourage people to use the ban tool more” as an option. (DirectedEvolution didn’t even know it was an option until it was pointed out to him recently.) If you actually want to advocate for that over a Said-specific rate-limit, I’m open to that (my model of you thinks that’s worse).
(Note: I, and I think several other people on the mod team, would have banned him from our own comment sections if we didn’t feel an obligation as mods/site-admins to have more open comment sections.)
3. I will probably build something that lets people Opt Into More Said. I think it’s fairly likely the mod team will generally do some heavier-handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a “let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way” option.
(I don’t expect that to really resolve your crux here but it seemed like it’s at least an improvement on the margin)
4. I think it’s plausible that the right solution is to ban him from shortform, and use shortform as the place where people can talk about whatever they want in a more open/curious vibe. I currently don’t think this is the right call, because I think it’s just actually a super reasonable, centrally supported use-case of top-level posts to have sets of norms that are actively curious and invested. It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is “criticize without trying to figure out what the OP is about and what problems they’re trying to solve”.
I do think, for the case of Said, building out two high-level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”, letting authors choose between them, and specifically banning Said from the former, is a viable option I’d consider. I think you have previously argued against this, and Said expressed dissatisfaction with it elsewhere in this comment section.
(This solution probably wouldn’t address my concerns about Duncan though)
I am a little worried that this is a generalization that doesn’t line up with actual evidence on the ground, and instead is caused by some sort of vibe spiral. (I’m reluctant to suggest a lengthy evidence review, both because of the costs and because I’m somewhat uncertain of the benefits—if the problem is that lots of authors find Said annoying or his reactions unpredictable, and we review the record and say “actually Said isn’t annoying”, those authors are unlikely to find it convincing.)
In particular, I keep thinking about this comment (noting that I might be updating too much on one example). I think we have evidence that “Said can engage with open/curious/interpretative topics/posts in a productive way”, and should maybe try to figure out what was different that time.
I think in the sense of the general garden-style conflict (rather than Said/Duncan conflict specifically) this is the only satisfactory solution that’s currently apparent, users picking the norms they get to operate under, like Commenting Guidelines, but more meaningful in practice.
There should be for a start just two options, Athenian Garden and Socratic Garden, so that commenters can cheaply make decisions about what kinds of comments are appropriate for a particular post, without having to read custom guidelines.
Excellent. I predict that Said wouldn’t be averse to voluntarily not commenting on “open/curious/cooperative” posts, or not commenting there in the kind of style that adherents of that culture dislike, so that “specifically banning Said” from that is an unnecessary caveat.
Well, I’m glad you’re telling actual-me this rather than using your model of me. I count the fact your model of me is so egregiously poor (despite our having a number of interactions over the years) as a case study in favor of Said’s interaction style (of just asking people things, instead of falsely imagining that you can model them).
Yes, I would, actually, want to advocate for informing users about a feature that already exists that anyone can use, rather than writing new code specifically for the purpose of persecuting a particular user that you don’t like.
Analogously, if the town council of the city I live in passes a new tax increase, I might grumble about it, but I don’t regard it as a direct personal threat. If the town council passes a tax increase that applies specifically to my friend Said Achmiz, and no one else, that’s a threat to me and mine. A government that does that is not legitimate.
So, usually when people make this kind of “hostile paraphrase” in an argument, I tend to take it in stride. I mostly regard it as “part of the game”: I think most readers can tell the difference between an attempted fair paraphrase (which an author is expected to agree with) and an intentional hostile paraphrase (which is optimized to highlight a particular criticism, without the expectation that the author will agree with the paraphrase). I don’t tell people to be more charitable to me; I don’t ask them to pass my ideological Turing test; I just say, “That’s not what I meant,” and explain the idea again; I’m happy to do the extra work.
In this particular situation, I’m inclined to try out a different commenting style that involves me doing less interpretive labor. I think you know very well that “criticize without trying to figure out what the OP is about” is not what Said and I think is at issue. Do you think you can rephrase that sentence in a way that would pass Said’s ideological Turing test?
Right, so if someone complains about Said, point out that they’re free to strong-downvote him and that they’re free to ban him from their posts. That’s much less time-consuming than writing new code! (You’re welcome.)
Sorry, I thought your job was to run a website, not dictate to people how they should think and write? (Where part of running a website includes removing content that you don’t want on the website, but that’s not the same thing as decreeing that individuals must “integrat[e] the spirit-of-[your]-models into [their] commenting style”.) Was I mistaken about what your job is?
I am strongly opposed to this because I don’t think the proposed distinction cuts reality at the joints. (I’d be happy to elaborate on request, but will omit the detailed explanation now in order to keep this comment focused.)
We already let authors write their own moderation guidelines! It’s a blank text box! If someone happens to believe in this “cooperative vs. adversarial” false dichotomy, they can write about it in the text box! How is that not enough?
Because it’s a blank text box, it’s not convenient for commenters to read it in detail every time, so I expect almost nobody reads it; these guidelines are not practical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
Also, moderation guidelines aren’t visible on GreaterWrong at all, afaict. So Said specifically is unlikely to adjust his commenting in response to those guidelines, unless that changes.
(I assume Said mostly uses GW, since he designed it.)
I’ve been busy, so hadn’t replied to this yet, but specifically wanted to apologize for the hostile paraphrase (I notice I’ve done that at least twice now in this thread; I’m trying to do better, but it seems important for me to notice and pay attention to).
I think I phrased the “corrigible about actually integrating the spirit-of-our-models into his commenting style” line pretty badly; Oliver and Vaniver also both thought it was pretty alarming. The thing I was trying to say I eventually reworded in my subsequent mod announcement as:
i.e. this isn’t about Said changing this own thought process, but, like, there is a spirit-of-the-law relevant in the mod decision here, and whether I need to worry about specification-gaming.
I expect you to still object to that for various reasons, and I think it’s reasonable to be pretty suspicious of me for phrasing it the way I did the first time. (I think it does convey something sus about my thought process, but, fwiw I agree it is sus and am reflecting on it)
FYI, my response to this is waiting for an answer to my question in the first paragraph of this comment.
I’m still uncertain how I feel about a lot of the details on this (and am enough of a lurker rather than poster that I suspect it’s not worth my time to figure that out / write it publicly), but I just wanted to say that I think this is an extremely good thing to include:
This strikes me basically as a way to move the mod team’s role more into “setting good defaults” and less “setting the only way things work”. How much y’all should move in that direction seems an open question, as it does limit how much cultivation you can do, but it seems like a very useful tool to make use of in some cases.
How technically troublesome would an allow list be?
Maybe the default is everyone gets three comments on a post. People the author has banned get zero, people the author has opted in for get unlimited, the author automatically gets unlimited comments on their own post, mods automatically get unlimited comments.
(Or if this feels more like a Said and/or Duncan specific issue, make the options “Unlimited”, “Limited”, and “None/Banned” then default to everyone at Unlimited except for Said and/or Duncan at Limited.)
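(To sketch how lightweight that data model could be; every name below is invented for illustration, not actual LW code:)

```typescript
// Hypothetical per-post comment allowances, per the proposal above.
type Allowance = "unlimited" | "limited" | "banned";

interface PostCommentPolicy {
  authorId: string;
  defaultAllowance: Allowance;          // e.g. "limited" = three comments
  overrides: Record<string, Allowance>; // per-user exceptions, set by the author
}

const budgets: Record<Allowance, number> = {
  unlimited: Infinity,
  limited: 3,
  banned: 0,
};

function commentBudget(policy: PostCommentPolicy, userId: string, isMod: boolean): number {
  // Mods and the post's author are never capped.
  if (isMod || userId === policy.authorId) return Infinity;
  return budgets[policy.overrides[userId] ?? policy.defaultAllowance];
}
```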
My prediction is that those users are primarily upvoting it for what it’s saying about Duncan rather than about Said.
To spell out what evidence I’m looking at:
There is definitely some term in my / the mod team’s equation for “this user is providing a lot of valuable stuff that people want on the site”. But the high level call the moderation team is making is something like “maximize useful truths we’re figuring out”. Hearing about how many people are getting concrete value out of Said or Duncan’s comments is part of that equation, hearing about how many people are feeling scared or offput enough that they don’t comment/post much is also part of that equation. And there are also subtler interplays that depend on our actual model of how progress gets made.
I wonder how much of the difference in intuitions about Duncan and Said come from whether people interact with LW primarily as commenters or as authors.
The concerns about Said seem to be entirely from and centered around the concerns of authors. He makes posting more costly; he drives content away. Meanwhile, many concerns about Duncan could be phrased as being about how he interacts with commenters.
If this trend exists, it is complicated. Said gets >0 praise from authors for his comments on their own posts (e.g. Raemon here), major Said defender Zack has written lots of well-regarded posts, and Said-banner DirectedEvolution writes good posts and stands out to me as one of the best commenters on science posts. Duncan also generates a fair amount of concern for attempts to set norms outside his own posts. But I think there might be a thread here.
Thank you for the compliment!
With writing science commentary, my participation is contingent on there being a specific job to do (often, “dig up quotes from links and citations and provide context”) and a lively conversation. The units of work are bite-size. It’s easy to be useful and appreciated.
Writing posts is already relatively speaking not my strong suit. There’s no preselection on people being interested enough to drive a discussion, what makes a post “interesting” is unclear, and the amount of work required to make it good is large enough that it feels like work more than play. When I do get a post out, it often fails to attract much attention. What attention it does receive is often negative, and Said is one of the more prolific providers of negative attention. Hence, I ban Said because he further inhibits me from developing in my areas of relative weakness.
My past conflict with Duncan arose when I would impute motives to him, or blur the precise distinctions in language he was attempting to draw—essentially failing to adopt the “referee” role that works so well in science posts, and putting the same negative energy I dislike receiving into my responses to Duncan’s posts. When I realized this was going on, I apologized and changed my approach, and now I no longer feel a sense of “danger” in responding to Duncan’s posts or comments. I feel that my commenting strong suit is quite compatible with friendly discourse with Duncan, and Duncan is good at generating lively discussions where my refereeing skillset may be of use.
So if I had to explain it, some people (me, Duncan) are sensitive about posting, while others are sharp in their comments (Said, anonymousaisafety). Those who are sensitive about posting will get frustrated by Said, while those who write sharp comments will often get in conflict with Duncan.
I’m not sure what other user you’re referring to besides Achmiz—it looks like there’s supposed to be another word between “about” and “and” in your first sentence, and between “about” and “could” in the last sentence of your second paragraph, but it’s not rendering correctly in my browser? Weird.
Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted criticisms that miss the author’s intended point) might annoy authors who think that commenters have a duty to read charitably. Then the commenter might be blamed for driving authors away, and the author might be blamed for getting too angrily defensive with commenters.
I interact with this website as an author more than a commenter these days, but in terms of the dichotomy I describe above, I am very firmly of the belief that authors have a duty to be clear. (To the extent that I expect that someone who disagrees with me, also disagrees with my proposed dichotomy; I’m not claiming to be passing anyone’s ideological Turing test.)
The other month I published a post that I was feeling pretty good about, quietly hoping that it might break a hundred karma. In fact, the comment section was very critical (in ways that I didn’t have satisfactory replies to), and the post only got 18 karma in 26 votes, an unusually poor showing for me. That made me feel a little bit sad that day, and less likely to write future posts that I could anticipate being disliked by commenters in the way that this post was disliked.
In my worldview, this is exactly how things are supposed to work. I didn’t have satisfactory replies to the critical comments. Of course that’s going to result in downvotes! Of course it made me a little bit sad that day! (By “conservation of expected feelings”: I would have felt a little bit happy if the post did well.) Of course I’m going to try not to write posts relevantly “like that” in the future!
I’ve been getting the sense that a lot of people somehow seem to disagree with me that this is exactly how things are supposed to work?—but I still don’t think I understand why. Or rather, I do have an intuitive model of why people seem to disagree, but I can’t quite permit myself to believe it, because it’s too uncharitable; I must not be understanding correctly.
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “clear” is a two-place relation. It doesn’t make sense to say that a post is clear or unclear, only who it is clear or unclear to.
To use a trivial example: well-taught Physics 201 is clear to you if you’ve had the prerequisite physics classes or are a physics savant, but not if you’re a layman. Poorly taught Physics 201 is clear to a subset of the people who would understand it if well-taught. And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 → Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post with fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.
So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “100%” but being more specific than that can be hard and is prone to disagreement.
Commenters of course have every right to say “I don’t understand this” and politely ask questions. But I, and I suspect the mods and most authors, reject the idea that publishing a piece on LessWrong gives me a duty to make every reader understand it. That may cost me karma or respect and I think that’s fine*, I’m not claiming a positive right to other people’s high regard.
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct, but not at the limit: there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
*although I think downvoting things I don’t understand is tricky specifically because it’s hard to tell where the problem lies, so I rarely do.
YES. I think this is hugely important, and I think it’s a pretty good definition of the difference between a confused person and a crank.
Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they’re lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they’re addressing. They already expect the author they’re questioning is fundamentally confused, and so they don’t waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank’s attention, since they’re obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.
There’s absolutely a middle ground. There are many times when I ask questions—let’s say of an academic author—where I think the author is probably either wrong or misguided in their analysis. But outside of pointing out specific facts that I know are wrong and suspect the author might not have noticed, I never address these authors in the manner of a crank. If I bother to contact them, it’s to ask questions to do things like:
Describe my specific disagreement succinctly, and ask the author to explain why they think or approach the issue differently
Ask about the points in the author’s argument I don’t fully understand, in case those turn out to be cruxes
Ask what they think about my counterargument, on the assumption that they’ve already thought about it and have a pretty good answer that I’m genuinely interested in hearing
This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers.
And this attitude is particularly corrosive to feelings of trust, collaboration, “jamming together,” etc. … it’s like walking into a martial arts academy and finding a person present who scoffs at both the instructors and the other students alike, and who doesn’t offer sufficient faith to even try a given exercise once before first a) hearing it comprehensively justified and b) checking the sparring records to see if people who did that exercise win more fights.
Which, yeah, that’s one way to zero in on the best martial arts practices, if the other people around you also signed up for that kind of culture and have patience for that level of suspicion and mistrust!
(I choose martial arts specifically because it’s a domain full of anti-epistemic garbage and claims that don’t pan out.)
But in practice, few people will participate in such a martial arts academy for long, and it’s not true that a martial arts academy lacking that level of rigor makes no progress in discovering and teaching useful things to its students.
You’re describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.
The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you’re right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can’t, and no one can, then he might have a point, and the gym gets to learn something new.
If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they’re an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren’t there in the first place. It’s definitely more challenging to jam with dissonant characters like that (especially if they’re dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it’s important to realize that the problem isn’t so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
Strong disagree that I’m describing a deeply dysfunctional gym; I barely described the gym at all and it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
There’s a difference between “hey, I want to understand the underpinnings of this” and the thing I described, which is hostile to the point of “why are you even here, then?”
Edit: I view the votes on this and the parent comment as indicative of a genuine problem; jimmy above is exhibiting actually bad reasoning (à la representativeness) and the LWers who happen to be hanging around this particular comment thread are, uh, apparently unaware of this fact. Alas.
Well, you mentioned the scenario as an illustration of a “particularly corrosive” attitude. It therefore seems reasonable to fill in the unspecified details (like just how disruptive the guy’s behavior is, how much of everyone’s time he wastes, how many instructors are driven away in shame or irritation) with pretty negative ones—to assume the gym has in fact been corroded, being at least, say, moderately dysfunctional as a result.
Maybe “deeply dysfunctional” was going too far, but I don’t think it’s reasonable to call that “way overconfident/projection-y”. Nor does the difference between “deeply dysfunctional” and “moderately dysfunctional” matter for jimmy’s point.
FYI, I’m inclined to upvote jimmy’s comment because of the second paragraph: it seems to be the perfect solution to the described situation (and to all hypothetical dysfunction in the gym, minor or major), and has some generalizability (look for cheap tests of beliefs, challenge people to do them). And your comment seems to be calling jimmy out inappropriately (as I’ve argued above), so I’m inclined to at least disagree-vote it.
“Let’s imagine that these unspecified details, which could be anywhere within a VERY wide range, are specifically such that the original point is ridiculous, in support of concluding that the original point is ridiculous” does not seem like a reasonable move to me.
Separately:
https://www.lesswrong.com/posts/WsvpkCekuxYSkwsuG/overconfidence-is-deceit
I think my feeling here is:
Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
But it’s not clear how important that mistake is to his comment. I expect people were mostly reacting to paragraphs 2 and 3, and you could cut paragraph 1 out and they’d stand by themselves.
Do the more-interesting parts of the comment implicitly rely on the projection/unjustified-claim? Also not clear to me. I do think the comment is overstated. (“The way to jam”?) But e.g. “the problem isn’t so much the difficulty as the inability to overcome the difficulty” seems… well, I’d say this is overstated too, but I do think it’s pointing at something that seems valuable to keep in mind even if we accept that the gym is functional.
So I don’t think it’s unreasonable that the parent got significantly upvoted, though I didn’t upvote it myself; and I don’t think it’s unreasonable that your correction didn’t, since it looks correct to me but like it’s not responding to the main point.
Maybe you think paragraphs 2 and 3 were relying more on the projection than it currently seems to me? In that case you actually are responding to what-I-see-as the main point. But if so I’d need it spelled out in more detail.
FWIW, that is a claim I’m fully willing and able to justify. It’s hard to disclaim all the possible misinterpretations in a brief comment (e.g. “deeply” != “very”), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.
Yes, and that’s why I described the attitude as “dysfunctionally dissonant” (emphasis in original). It’s not a good way of challenging the instructors, and not the way I recommend behaving.
What I’m talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system’s combined dysfunction never becomes supercritical and instead decays towards productive cooperation.
That’s certainly one possibility. But isn’t it also conceivable that I simply see underlying dynamics (and the lack thereof) which you don’t see, and which justify the confidence level I display?
It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like “Can I pass his ITT”/”Can I point to a flaw in his argument that makes him stutter if not change his mind”/etc.
To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn’t appear to come out of nowhere, but I do believe I am able to justify what I’m saying very well and won’t hesitate to do so if anyone wants further explanation or sees something which doesn’t seem to fit. And hey, if it turns out I’m wrong about how well supported my perspective is, I promise not to be a poor sport about it.
In absence of an object level counterargument, this is textbook ad hominem. I won’t argue that there isn’t a place for that (or that it’s impossible that my reasoning is flawed), but I think it’s hard to argue that it isn’t premature here. As a general rule, anyone that disagrees with anyone can come up with a million accusations of this sort, and it isn’t uncommon for some of it to be right to an extent, but it’s really hard to have a productive conversation if such accusations are used as a first resort rather than as a last resort. Especially when they aren’t well substantiated.
I see that you’ve deactivated your account now so it might be too late, but I want to point out explicitly that I actively want you to stick around and feel comfortable contributing here. I’m pushing back against some of the things you’re saying because I think that it’s important to do so, but I do not harbor any ill will towards you nor do I think what you said was “ridiculous”. I hope you come back.
I thought it was a reference to, among other things, this exchange where Said says one of Duncan’s Medium posts was good, and Duncan responds that his decision to not post it on LW was because of Said. If you’re observing that Said could just comment on Medium instead, or post it as a linkpost on LW and comment there, I think you’re correct. [There are, of course, other things that are not posted publicly, where I think it then becomes true.]
I do want to acknowledge that, based on various comments and vote patterns, I agree it seems like a pretty controversial call, and I model it as something like spending down (and/or making a bet with) a limited resource: maybe two specific resources, “trust in the mods” and “some groups of people’s willingness to put up with the site being optimized in a way they think is wrong.”
Despite that, I think it is the right call to limit Said significantly in some way, but I don’t think we can make many moderation calls on users this established that are this controversial without causing some pretty bad things to happen.
Indeed. I would encourage you to ask yourself whether the number referred to by “that many” is greater than zero.
I don’t remember this. I feel like Aella’s post introduced the term?
A better example might be Circling, though I think Said might have had a point that it hadn’t been carefully scrutinized; a lot of people had just been doing it.
Frame control was a pretty central topic in “what’s going on with Brent?” discussions two years prior, as well as in some other circumstances. We’d been talking about it internally at Lightcone/LessWrong during that time.
Hmm, yeah, I can see that. Perhaps just not under that name.
I think the term was getting used, but it makes sense if you weren’t as involved in those conversations. (I just checked and there’s only one old internal lw-slack message about it from 2019, but it didn’t feel like a new term to me at the time, and I’m pretty sure it came up a bunch on FB and in moderation convos periodically under that name.)
Ray writes:
For the record, I think the value here is “Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world”, and I don’t think that comes across in this bullet.
Yeah I agree with this, and agree it’s worth emphasizing more. I’m updating the most recent announcement to indicate this more, since not everyone’s going to read everything in this thread.
Great!
I feel like this incentivizes comments to be short, which doesn’t make them less aggravating to people. For example, IIRC people have complained about him commenting “Examples?”. This is not going to be hit hard by a rate limit.
‘Examples?’ is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
Spending my last remaining comment here.
I join Ray and Gwern in noting that asking for examples is generically good (and that I’ve never felt or argued to the contrary). Since my stance on this was called into question, I elaborated:
My recent experience has been that saying “this is half-baked” is not met with a subsequent shift in commentary toward the “Oh, I don’t have any yet, this is speculative, so YMMV” tone.
I think it would be nice if LW could have both tones:
I’m claiming this quite confidently; bring on the challenges, I’m ready to convince
I have a gesture in a direction I’m pretty sure has merit, but am not trying to e.g. claim that if others don’t update to my position they’re wrong; this is a sapling and I’d like help growing it, not help stepping on it.
Trying to do things in the latter tone on LW has felt, to me, extremely anti-rewarding of late, and I’m hoping that will change, because I think a lot of good work happens there. That’s not to say that the former tone is bad; it feels like they are twin pillars of intellectual progress.
Noting that my very first lesswrong post, back in the LW1 days, was an example of #2. I was wrong on some of the key parts of the intuition I was trying to convey, and ChristianKl corrected me. As an introduction to posting on LW, that was pretty good—I’d hate to think that’s no longer acceptable.
At the same time, there is less room for it now that the community has gotten much bigger, and I’d probably weak-downvote a similar post today, rather than trying to engage with a similar mistake, given how much content there is. Not sure if there is anything that can be done about this, but it’s an issue.
fwiw that seems like a pretty great interaction. ChristianKl seems to be usefully engaging with your frame while noting things about it that don’t seem to work, seems (to me) to have optimized somewhat for being helpful, and also the conversation just wraps up pretty efficiently. (and I think this is all a higher bar than what I mean to be pushing for, i.e. having only one of those properties would have been fine)
I agree—but think that now, if and when similar initial thoughts on a conceptual model are proposed, there is less ability or willingness to engage, especially with people who are fundamentally confused about some aspect of the issue. This is largely, I believe, due to the volume of new participants and the reduced engagement with those types of posts.
I want to reiterate that I actually think the part where Said says “examples?” is basically just good (and is only bad insofar as it creates a looming worry of particular kinds of frustrating, unproductive and time-consuming conversations that are likely to follow in some subsets of discussions)
(edit: I actually am pretty frustrated that “examples?” became the go-to example people talked about and reified as a kinda rude thing Said did. I think I basically agree this process is good:
Alice → writes confident posts without examples
Bob → says “examples?”
Alice → either gives (at least one, and yeah ideally 3) examples, or says “Oh, I don’t have any yet, this is speculative, so YMMV”, or doesn’t reply but feels a bit chagrined.
)
Oops, sorry for saying something that probabilistically implied a strawman of you.
I’m not sure what you think this is strong evidence of?
I don’t think it’s “strong” evidence per se, but, it was evidence that something I’d previously thought was more of a specific pet-peeve of Duncan’s, was more objected to by more LessWrongfolk.
(Where the thing in question is something like “making sweeping ungrounded claims about other people… but in a sort of colloquial/hyperbolic way which most social norms don’t especially punish”.)
Some evidence for that, though it also seems likely to get upvoted on the basis of “well written and evocative of a difficult personal experience”, or because people relate to being outliers and unusual even if they didn’t feel alienated and hurt in quite the same way. I’m unsure.
I upvoted it because it made me finally understand what in the world might be going on in Duncan’s head to make him react the way he does.
If the lifeguard isn’t on duty, then it’s useful to have the ability to be your own lifeguard.
I wanted to say that I appreciate the moderation style options and authors being able to delete and ban for their posts. While we’re talking about what to change and what isn’t working, I’d like to weigh in on the side of that being a good set of features that should be kept. Raemon, you’ve mentioned those features are there to be used. I’ve never used the capability and I’m still glad it exists. (I can barely use it actually.) Since site wide moderators aren’t going to intervene everywhere quickly (which I don’t think they should or even can, moderators are heavily outnumbered) then I think letting people moderate their local piece is good.
If I ran into lots of negative feedback I didn’t think was helpful and it wasn’t getting moderated by me or the site admins, I’d just move my writing to a blog on a different website where I could control things. Possibly I’d set up crossposting like Zvi or Jefftk and then ignore the LessWrong comment section. If lots of people do that, then we get the diaspora effect from late LessWrong 1.0. Having people at least crossposting to LessWrong seems good to me, since I like tools like the agreement karma and the tag upvotes. Basically, the BATNA for a writer who doesn’t like LessWrong’s comment section is Wordpress or Substack. Some writers you’d rather go elsewhere obviously, but Said and Duncan’s top level posts seem mostly a good fit here.
I do have a question about norm setting I’m curious about. If Duncan had titled his post “Duncan’s Basics of Rationalist Discourse” would that have changed whether it merited the exception around pushing site wide norms? What if lots of people started picking Norm Enforcing for the moderation guidelines and linking to it?
Yeah I think this’d be much less cause for concern. (I haven’t checked whether the rest of the post has anything else that felt LW-wide-police-y about it, I’d maybe have wanted a slightly different opening paragraph or something)
I think Duncan also posts all his articles on his own website—is this correct?
In that case, would it be okay to replace the articles on LW with links to Duncan’s website? So that the articles stay there, the comments stay here, the page with comments links to the article, but the article does not link to the page with comments.
I am not suggesting doing this. I am asking: if Duncan (or anyone else) hypothetically at some moment decided, for whatever reason, that he is uncomfortable with his articles being on LW, would doing this (moving the articles elsewhere and replacing them with links to the new location) be acceptable to you? Like, could this be a policy: “if you decide to move away from LW, this is our preferred way to do it”?
Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking.
Both types of content are needed. Writing posts pattern matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind anyway. (update: prediction market)
Inspired by this post, I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way, if you have a disagreement or are misunderstanding a post, there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turn-over.)
Obviously the exact ratio doesn’t have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.
I’m not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post.
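(To make the mechanics concrete, here’s a minimal sketch of the check this would need. All names are invented for illustration, this isn’t anything the mods have committed to, and a real version might count within a daily window rather than over the post’s lifetime.)

```typescript
// Hypothetical enforcement of the per-post comment cap proposed above.
const COMMENT_CAP_PER_POST = 3; // the ratio the mod team would tune

interface CommentRecord {
  postId: string;
  commenterId: string;
}

function mayComment(
  history: CommentRecord[],
  postId: string,
  userId: string,
  postAuthorId: string
): boolean {
  if (userId === postAuthorId) return true; // never rate-limit authors on their own post
  const used = history.filter(
    (c) => c.postId === postId && c.commenterId === userId
  ).length;
  return used < COMMENT_CAP_PER_POST;
}
```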
i find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what i see as problematic. good comments should most of the time not be criticism, but part of the building.
the dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and how those relate to the post.
counting all comments as prune instead of babble disincentivizes babble-comments. is this what you want?
I don’t see all comments as criticism. Many comments are of the building up variety! It’s that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times.
Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.
The benefits of building-comments are easy to get in 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by allowing only 3 comments per day per post.
i think we have very different models of things, so i will try to clarify mine. my best babble-site example is not in English, so i will give another one—the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. just look at the sheer LENGTH of this page!
https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor
there are many more than 3 comments per person there.
from my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. my best discussions with friends go like this: one shares a model, the other asks questions, or shares a different model, or shares an experience, the first reacts, etc., for way more than three comments. more like 30 comments. it’s dialog. and there are a lot of unproductive examples of that on LW, and it’s quite possible (as in, i assign it a probability of 0.9) that in first-order effects, the rule will cut out unproductive discussions and will be positive.
but i find rules that prevent the best things from happening bad in some way that i can’t explain clearly. something like: i’m here to try to go higher. if that’s impossible, then why bother?
I also think it’s a VERY restrictive rule. i wrote more than three comments here, and you are the first one to answer me. like, i’m right now taking part in a counter-example to “i would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.”
i shared my opinions on very different and unrelated parts of this conversation here. this is my sixth comment. and i feel i reacted very low-heat. the idea that i should avoid or conserve those comments to have only three makes me want to avoid commenting on LW altogether. the message i get from this rule is like… like i’m assumed guilty of a thing i literally never do, and so have very restrictive rules placed on me, and it’s very unfriendly in a way that i find hard to describe.
like, 90% of the activity this rule will restrict is legitimate, good comments. this is an awful false-positive ratio. and that’s even before you count the you-are-bad-and-unwelcome effect, which i feel from it and you, apparently, do not.
Yeah this is the sort of solution I’m thinking of (although it sounds like you’re maybe making a more sweeping assumption than me?)
My current rough sense is that a rate limit of 3 comments per post per day (maybe with an additional wordcount based limit per post per day), would actually be pretty reasonable at curbing the things I’m worried about (for users that seem particularly prone to causing demon threads)
Complaints by whom? And why are these complaints significant?
Are you taking the stance that all or most of these complaints are valid, i.e. that the things being complained about are clearly bad (and not merely dispreferred by this or that individual LW member)?
(See also this recent comment, where I argue that at least one particular characterization of my commenting activity is just demonstrably inconsistent with reality.)
Here’s a bit of metadata on this: I can recall offhand 7 complaints from users with 2000+ karma who aren’t on the mod team (most of whom had significantly more than 2000 karma, and all of them had some highly upvoted comments and/or posts that are upvoted in the annual review). One of them cites you as being the reason they left LessWrong a few years ago, and ~3-4 others cite you as being a central instance of a pattern that means they participate less on LessWrong, or can’t have particularly important types of conversations here.
I also think most of the mod team (at least 4 of them? maybe more) have had such complaints (as users, rather than as moderators).
I think there’s probably at least 5 more people who complained about you by name who I don’t think have particularly legible credibility beyond “being some LessWrong users.”
I’m thinking about my reply to “are the complaints valid tho?”. I have a different ontology here.
There are some problems with this as pointing in a particular direction. There is little opportunity for people to be prompted to express opposite-sounding opinions, and so only the above opinions are available to you.
I have a concern that Said and Zack are an endangered species that I want there to be more of on LW and I’m sad they are not more prevalent. I have some issues with how they participate, mostly about tendencies towards cultivating infinite threads instead of quickly de-escalating and reframing, but this in my mind is a less important concern than the fact that there are not enough of them. Discouraging or even outlawing Said cuts that significantly, and will discourage others.
Ray pointing out the level of complaints is informative even without (far more effort) judgement on the merits of each complaint. There being a lot of complaints is evidence (to both the moderation team and the site users) that it’s worth putting in effort here to figure out if things could be better.
It is evidence that there is some sort of problem. It’s not clear evidence about what should be done about it, about what “better” means specifically. Instituting ways of not talking about the problem anymore doesn’t help with addressing it.
It didn’t seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether thing could be better. Rather, he was complaining about them being used as evidence that things could be better.
If we speak precisely… in what way would they be the former without being the latter? Like, if I now think it’s more worth figuring out whether things could be better, presumably that’s because I now think it’s more likely that things could be better?
(I suppose I could also now think the amount-they-could-be-better, conditional on them being able to be better, is higher; but the probability that they could be better is unchanged. Or I could think that we’re currently acting under the assumption that things could be better, I now think that’s less likely so more worth figuring out whether the assumption is wrong. Neither seems like they fit in this case.)
Separately, I think my model of Said would say that he was not complaining, he was merely asking questions (perhaps to try to decide whether there was something to complain about, though “complain” has connotations there that my model of Said would object to).
So, if you think the mods are doing something that you think they shouldn’t be, you should probably feel free to say that (though I think there are better and worse ways to do so).
But if you think Said thinks the mods are doing something that Said thinks they shouldn’t be… idk, it feels against-the-spirit-of-Said to try to infer that from his comment? Like you’re doing the interpretive labor that he specifically wants people not to do.
My comment wasn’t well written, I shouldn’t have used the word “complaining” in reference to what Said was doing. To clarify:
As I see it, there are two separate claims:
That the complaints prove that Said has misbehaved (at least a little bit)
That the complaints increase the probability that Said has misbehaved
Said was just asking questions—but baked into his questions is the idea of the significance of the complaints, and this significance seems to be tied to claim 1.
Jefftk seems to be speaking about claim 2. So, his comment doesn’t seem like a direct response to Said’s comment, although the point is still a relevant one.
(fyi I do plan to respond to this, although don’t know how satisfying it’ll be when I do)
Warning to Duncan
(See also: Raemon’s moderator action on Said)
Since we were pretty much on the same page, Raemon delegated writing this warning to Duncan to me, and signed off on it.
Generally, I am quite sad if, when someone points/objects to bad behavior, they end up facing moderator action themselves. It doesn’t set a great incentive. At the same time, some of Duncan’s recent behavior also feels quite bad to me, and to not respond to it would also create a bad incentive – particularly if the undesirable behavior results in something a person likes.
Here’s my story of what happened, building off of some of Duncan’s own words and his endorsement of something I said in a previous exchange with him:
Duncan felt that Said engaged in various behaviors that hurt him (confident based on Duncan’s words) and were in general bad (inferred from Duncan writing posts describing why those behaviors are bad). Such bad/hurtful behaviors include strawmanning, psychologizing at length, and failing to put in symmetric effort. For example, Said argued that Duncan banned him from his posts because Said disagreed. I am pretty sympathetic to these accusations against Said (and endorse moderation action against Said) and don’t begrudge Duncan any feelings of frustration and hurt he might have.
Duncan additionally felt that the response of other users (e.g. in voting patterns) and moderators was not adequate.
Given what he felt to be the inadequate response from others, Duncan decided to defend himself (or try to cause others to defend him). His manner of doing so, I feel, generates quite a few costs that warrant moderator action to incentivize against Duncan or others imposing these costs on the site and mods in the future.
The following is a summary of what I consider Duncan’s self-defensive behavior (not necessarily in order of occurrence).
Argued back and forth in the comments
Banned Said from his posts
Argued more in comments not on his own posts
Requested that the moderators intervene, and quickly (offsite)
Wrote a top-level post at least somewhat in response to Said (planned to write it anyhow, but prioritized based on Said interactions), and it was interpreted by others as being about Said and calling for banning him.
In further comments, identified statements that he says cause him to categorize and treat Said as an intentional liar.
Said he’d prefer a world where both he and Said were banned over one where neither was.
Accused the LessWrong moderators of not maintaining a tended garden, and said that perhaps he should just leave.
Individually and done occasionally, I think many of these actions are fine. The “ban users from your posts” feature is there so that you don’t have to engage with a user you don’t want to; as a mod, I appreciate people flagging behavior they think isn’t good; writing top-level posts describing why you think certain behaviors are bad (in a timeless/universal way) is also good; and if the site doesn’t make you feel safe, saying so and leaving also seems legit (I’m sad if that’s true, but I’d like to know it rather than have someone leave silently).
Requesting quick moderator intervention, announcing that he categorizes and treats Said as an intentional liar, saying that he’d prefer that both he and Said be banned rather than neither, and writing a post that at least some people interpreted as calling for Said to be banned all feel like a pretty “aggressive” response. Combined with the other behaviors that are more usually okay but still confrontational, it feels to me like Duncan’s response was quite escalatory in a way that generates costs.
First, I think it’s bad to have users on the site whom others are afraid of getting into conflict with. Naturally, people weigh the expected value and expected costs of posting/commenting/etc., and I know with high confidence that I myself and at least three others (and I assume quite a few more) are pretty afraid to get into conflict with Duncan, because Duncan argues long and hard and generally invests a lot of time to defend himself against what feels like harm, e.g. all the ways he has done so on this occasion. I assume here that others are similar to me (not everyone, but enough) in being quite wary of accidentally doing something Duncan reacts to as a terrible norm violation, because doing so can result in a really unpleasant conflict (this has happened twice that I know of with other LW team members).
I recognize that Duncan feels like he’s trying to make LessWrong a place that’s net positive for him to contribute to, and does so in some prosocial ways (e.g. writing Basics of Rationalist Discourse), but I need to call out ways in which his manner of doing so also causes harm, e.g. a climate of fear where people won’t express disagreement because defending themselves against Duncan would be extremely exhausting and effortful.
This is worsened by the fact that Duncan is often advocating for norms. If he were writing about trees and you were afraid to disagree, it might not be a big deal. But when he is arguing for norms for your community, it’s worse if you think he might be advocating something wrong and disagreeing feels very risky.
Second, Duncan’s behavior directly or indirectly requires moderator attention, sometimes fairly immediately (partly because he’s requested a quick response, and partly because if there’s an overt conflict between users, mods really ought to chime in sooner rather than later). I would estimate that the team has collectively spent 40+ hours on moderation over two weeks in response to recent events (some of that I place on Said, who probably needed moderation anyway), but the need to drop other work and respond to the conflict right now is time-consuming and disruptive. Not counting exactly, it feels like this has happened periodically for several years with Duncan.
Duncan is a top contributor to the site, and I think for the most part advocates for good norms, so it feels worth it to devote a good amount of time and attention to his requests, but only so much. So there’s a cost there I want to call out that was incurred from recent behavior. (I think that if Duncan had notified us that he really didn’t like some of Said’s behavior, pointed to a thread, and said he’d like a response within two months or else he might leave the site – that would have been vastly less costly to us than what happened.)
I don’t think we’ve previously pointed out the costs here, so it’s fair to issue a warning rather than any harsher action.
Duncan, if you do things that impose what feel to me like the following costs:
Taking actions such that I predict users will be afraid to engage with you, at the same time as you advocate norms
Demanding fast responses to things you don’t like, thereby costing mods a lot of resources in excess of what seems reasonable (and you’re basically out of budget for a long while now)
then the moderators will escalate moderator action in response, e.g. rate limits or bans of escalating duration.
A couple of notes of clarification. I feel that this warning is warranted on the basis of Duncan’s recent behavior re: Said alone, but my thinking is informed by similar-ish patterns from the past that I didn’t get into here. Also, for other users wondering whether this warning could apply to them: theoretically, yes, but I think most users aren’t at all close to doing the things here that I don’t like. If you have not previously had extensive engagement with the mods about a mix of your complaints and behavior, then what I’m describing here as objectionable is very unlikely to be something you’re doing.
To close, I’ll say I’m sad that the current LessWrong feels like somewhere where you, Duncan, need to defend yourself. I think many of your complaints are very very reasonable, and I wish I had the ability to immediately change things. It’s not easy and there are many competing tradeoffs, but I do wish this was a place where you felt like it was entirely positive to contribute.
Just noting as a “for what it’s worth”
(b/c I don’t think my personal opinion on this is super important or should be particularly cruxy for very many other people)
that I accept, largely endorse, and overall feel fairly treated by the above (including the week suspension that preceded it).
Moderation action on Said
(See also: Ruby’s moderator warning for Duncan)
I’ve been thinking for a week, and trying to sanity-check whether there are actual good examples of Said doing-the-thing-I’ve-complained-about, rather than “I formed a stereotype of Said and pattern match to it too quickly”, and such.
I think Said is a pretty confusing case though. I’m going to lay out my current thinking here, in a number of comments, and I expect at least a few more days of discussion as the LessWrong community digests this. I’ve pinned this post to the top of the frontpage for the day so users who weren’t following the discussion can decide whether to weigh in.
Here’s a quick overview of how I think about Said moderation:
Re: Recent Duncan Conflict.
I think he did some moderation-worthy things in the recent conflict with Duncan, but a) so did Duncan, and I think there’s an “it takes two to tango” aspect to demon threads, b) at most, those’d result in me giving one or both of them a 1-week ban and then calling it a day. I basically endorse Vaniver’s take on some of the object-level stuff. I have a bit more to say, but not much.
Overall pattern.
I think Said’s overall pattern of commenting includes a mix of “subtly enforcing norms that aren’t actual LW site norms (see below)”, “being pretty costly to interact with, in a way that feels particularly ‘like a trap’”, and “in at least some domains, being consistently not-very-correct in his implied criticisms”. I think each of those things is at least a little bad in isolation (though not necessarily moderation-worthy). But I think together they become worse than the sum of their parts. If he were consistently doing the entire pattern, I would either ban him, or invent new tools to either alleviate the cost or tax the behavior in a less heavy-handed way.
Not sufficient corresponding upside
I’d be a lot less wary of the previous pattern if I felt like Said was also contributing significantly more value to LessWrong. [Edit: I do, to be clear, think Said has contributed significant value, both in terms of keeping the spirit of the sequences alive in the world ala readthesequences.com, and through being a voice with a relatively rare (these days) perspective that keeps us honest in important ways. But I think the costs are, in fact, really high, and I think the object level value isn’t enough to fully counterbalance it]
Prior discussion and warnings.
We’ve had numerous discussions with Said about this (I think we’ve easily spent 100+ hours of moderator-time on it, and probably more like 200), including an explicit moderation warning.
Few recent problematic pattern instances.
That all said, prior to this ~month’s conflict with Duncan, I don’t have a confident belief that Said has recently strongly embodied the pattern I’m worried about. I think it was more common ~5 years ago. I cut Said some slack for the convo with Duncan because I think Duncan is kind of frustrating to argue with.
THAT said, I think it’s crept up at least somewhat occasionally in the past 3 years, and having to evaluate whether it’s creeping up to an unacceptable level is fairly costly.
THAT THAT said, I do appreciate that the first time we gave him an explicit moderation notice, I don’t think we had any problems for ~3 years afterwards.
Strong(ish) statement of intent
Said’s made a number of comments that make me think he would still be doing a pattern I consider problematic if the opportunity arose. I think he’ll follow the letter of the law if we give it to him, but it’s difficult to specify a letter-of-the-law that does the thing I care about.
A thing that is quite important to me is that users feel comfortable ignoring Said if they don’t think he’s productive to engage with. (See below for more thoughts on this). One reason this is difficult is that it’s hard to establish common knowledge about it among authors. Another reason is that I think Said’s conversational patterns have the effect of making authors and other commenters feel obliged to engage with him (but, this is pretty hard to judge in a clear-cut way)
For now, after a bunch of discussion with other moderators, reading the thread-so-far, and talking with various advisors – my current call is giving Said a rate limit of 3-comments-per-post-per-week. See this post on the general philosophy of rate limiting as a moderation tool we’re experimenting with. I think there’s a decent chance we’ll ship some new features soon that make this actually a bit more lenient, but don’t want to promise that at the moment.
I am not very confident in this call, and am open to more counterarguments here, from Said or others. I’ll talk more about some of the reasoning here at the end of this comment. But I want to start by laying out some more background reasoning for the entire moderation decision.
In particular, if either Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with his comments”; or, someone suggests a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way, I’d feel fairly good about revoking the rate-limit.
(Note: one counterproposal I’ve seen is to develop a rate-limit based entirely on karma rather than moderator judgment, and that it is better to do this than to have moderators make individual judgment calls about specific users. I do think this idea has merit, although it’s hard to build. I have more to say about it at the end)
Said Patterns
3 years ago Habryka summarized a pattern we’d seen a lot:
I think the most central of this is in this thread on circling, where AFAICT Said asked for examples of some situations where social manipulation is “good.” Qiaochu and Sarah Constantin offer some examples. Said responds to both of them by questioning their examples and doubting their experience in a way that is pretty frustrating to respond to (and in the Sarah case seemed to me like a central example of Said missing the point, and the evo-psych argument not even making sense in context, which makes me distrust his taste on these matters). [1, 2]
I don’t actually remember more examples of that pattern offhand. I might be persuaded that I overupdated on some early examples. But after thinking a few days, I think a cruxy piece of evidence on how I think it makes sense to moderate Said is this comment from ~3 years ago:
For completeness, Said later elaborates:
Habryka and Said discussed it at length at the time.
I want to reiterate that I think asking for examples is fine (and would say the same thing for questions like “what do you mean by ‘spirituality’?” or whatnot). I agree that a) authors generally should try to provide examples in the first place, b) if they don’t respond to questions about examples, that’s bayesian evidence about whether their idea will ground out into something real. I’m fairly happy with clone of saturn’s variation on Said’s statement, that if the author can’t provide examples, “the post should be regarded as less trustworthy” (as opposed to “author should be interpreted as ignorant”), and gwern’s note that if they can’t, they should forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
The thing I object fairly strongly to is “there is an obligation on the part of the author to respond.”
I definitely don’t think there’s a social obligation, and I don’t think most LessWrongers think that. (I’m not sure if Said meant to imply that). Insofar as he means there’s a bayesian obligation-in-the-laws-of-observation/inference, I weakly agree but think he overstates it: there’s a lot of reasons an author might not respond (“belief that a given conversation won’t be productive,” “volume of such comments,” “trying to have a 202 conversation and not being interested in 101 objections,” and simple opportunity cost).
From a practical ‘things that the LessWrong culture should socially encourage people to do’, I liked Vladimir’s point that:
i.e. I want there to be good criticism on LW, and think that people feeling free to ignore criticism encourages more good criticism, in part by encouraging more posts and engagement.
It’s been a few years and I don’t know whether Said still endorses the obligation phrasing, but much of my objection to Said’s individual commenting stylistic choices has to do with reinforcing this feeling of obligation. I also think (less confidently) that authors get an impression that Said thinks that if an author hasn’t answered a question to his satisfaction (with Said as an example of a reasonable median LW user), they should feel a [social] obligation to succeed at doing so.
Whether he intends this or not, I think it’s an impression that comes across, and which exerts social pressure, and I think this has a significant negative effect on the site.
I’m a bit confused about how to think about “prescribed norms” vs “good ideas that get selected on organically.” In a previous post Vladimir_Nesov argues that prescribing norms generally doesn’t make sense. Habryka had a similar take yesterday when I spoke with him. I’m not sure I agree (and some of my previous language here has probably assumed a somewhat more prescriptivist/top-down approach to moderating LessWrong that I may end up disendorsing after chatting more with Habryka).
But even in a more organic approach to moderation, Habryka, Ruby, and I think it’s pretty reasonable for moderators to take action to prevent Said from implying that there’s some kind of norm here and exerting pressure around it in other people’s comment sections, when, AFAICT, there is no consensus on such a norm. I predict a majority of LessWrong members would not agree with that norm, either on normative-Bayesian terms or on consequentialist social-norm-design terms. (To be clear, I think many people just haven’t thought about it at all, but I expect them to at least weakly disagree when exposed to the arguments. “What is the actual collective endorsed position of the LW commentariat?” is somewhat cruxy for me here.)
Rate-limit decision reasoning
If this were our first (or second or third) argument with Said over this, I’d think stating this clearly and giving him a warning would be a reasonable next action. Given that we’ve been intermittently arguing about this for 5 years, spending a hundred-plus hours of mod time discussing it with him, it feels more reasonable to move to an ultimatum of “somehow, Said needs to stop exerting this pressure in other people’s comment threads, or moderators will take some kind of significant action to either limit the damage or impose a tax on it.”
If we were limited to our existing moderator tools, I would think it reasonable to ban him. But we are in the middle of setting up a variety of rate limiting tools to generally give mods more flexibility, and avoid being heavier-handed than we need to be.
I’m fairly open to a variety of options here. FWIW, I am interested in what Said actually prefers here. (I expect it is not a very fun conversation to be asked by the people-in-power “which way of constraining you from doing the thing you think is right seems least-bad to you?”, but, insofar as Said or others have an opinion on that I am interested)
I am interested in building an automated tool that detects demon threads and rate-limits people based on voting patterns. I most likely want to try to build such a tool regardless of what call we make on Said, and if I had a working version of such a tool I might be pretty satisfied with using it instead (a rough sketch of the kind of heuristic I have in mind follows the list below). My primary cruxes are
a) I think it’s a lot harder to build and I’m not sure we can succeed,
b) I do just think it’s okay for moderators to make judgment calls about individual users based on longterm trends. That’s sort of what mods are for. (I do think for established users it’s important for this process to be fairly costly and subjected to public scrutiny)
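To make the voting-patterns idea concrete, here’s the sketch. Everything in it (the type, the fields, the thresholds) is hypothetical and would need tuning against real data; it illustrates the shape of the tool, not a spec:

```typescript
// Hypothetical shape of the data available to the detector; not a real schema.
interface ThreadComment {
  authorId: string;
  depth: number;      // nesting level within the thread
  voteCount: number;  // total votes cast (up + down)
  karma: number;      // net score
}

// Heuristic: a "demon thread" is a deep exchange among few participants
// where voting is heavy but net karma stays low, i.e. lots of engagement,
// little approval. All thresholds here are guesses.
function looksLikeDemonThread(thread: ThreadComment[]): boolean {
  if (thread.length < 10) return false;

  const participants = new Set(thread.map(c => c.authorId)).size;
  const maxDepth = Math.max(...thread.map(c => c.depth));

  const totalVotes = thread.reduce((sum, c) => sum + c.voteCount, 0);
  const totalKarma = thread.reduce((sum, c) => sum + c.karma, 0);
  // Controversial threads attract many votes that mostly cancel out.
  const controversy = totalVotes > 0 ? 1 - Math.abs(totalKarma) / totalVotes : 0;

  return participants <= 3 && maxDepth >= 6 && controversy > 0.5;
}
```

The point of the sketch is that a detector like this would fire on thread shape and voting patterns rather than on who is participating, which is what would make it organic rather than user-targeted.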
But for now, after chatting with Oli and Ruby and Robert, I’m implementing the 3-comments-per-post-per-week rule for Said. If we end up having time to build/validate an organic karma-based rate limit that solves the problem I’m worried about here, I might switch to that. Meanwhile, some additional features I haven’t shipped yet, which I can’t make promises about, but which I personally think would be good to ship soon, include (a sketch of how these fit together with the rate limit follows the list):
A boolean flag for individual posts so authors can opt in to “rate-limited people can comment freely here”, and probably also a user-setting for this. Another possibility is a user-specific whitelist, but that’s a bit more complicated, and I’m not sure there’s anyone who would want it who wouldn’t want the simpler option.
I’d ideally have this flag set on this post, and probably on other moderation posts written by admins.
Rate-limited users in a given comment section have a small icon that lets you know they’re rate-limited, so you have reasonable expectations of when they can reply.
Updating the /moderation page to list rate limited users, ideally with some kind of reason / moderation-warning.
Updating rate limits to ensure that users can comment as much as they want on their own posts (we made a PR for this change a week ago and haven’t shipped it yet largely because this moderation decision took a lot of time)
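For concreteness, here’s a minimal sketch of how the per-user rule and the exemptions above could fit together. All the type names and fields are made up for illustration; this isn’t the actual implementation, just the shape of the behavior:

```typescript
// Hypothetical types; illustrative only.
interface RateLimit {
  userId: string;
  maxCommentsPerPost: number; // e.g. 3
  windowDays: number;         // e.g. 7
}

interface CommentRecord {
  userId: string;
  postId: string;
  postedAt: Date;
}

function canCommentNow(
  limit: RateLimit,
  post: { id: string; authorId: string; exemptRateLimitedUsers: boolean },
  priorComments: CommentRecord[], // the rate-limited user's comment history
  now: Date = new Date(),
): boolean {
  // Users can always comment freely on their own posts.
  if (post.authorId === limit.userId) return true;
  // Authors can opt their post out of rate limits entirely (the boolean flag above).
  if (post.exemptRateLimitedUsers) return true;

  const windowStart = now.getTime() - limit.windowDays * 24 * 60 * 60 * 1000;
  const recentOnThisPost = priorComments.filter(
    c =>
      c.userId === limit.userId &&
      c.postId === post.id &&
      c.postedAt.getTime() >= windowStart,
  ).length;

  return recentOnThisPost < limit.maxCommentsPerPost;
}
```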
Some reasons for this-specific-rate-limit rather than alternatives are:
3 comments within a week is enough for an initial back-and-forth where Said asks questions or makes a critique, the author responds, and Said responds to the response (i.e. allowing the 4 layers of intellectual conversation, and getting the parts of Said’s comments that most people agree are valuable).
It caps the conversation before it can spiral into an unproductive, escalatory thread.
It signals culturally that the problem here isn’t about initial requests for examples or criticisms; it’s about the pattern that tends to play out deeper in threads. I think it’s useful for this to be legible both to authors engaging with Said and to other commenters inferring site norms (i.e. some amount of Socrates is good, too much can cause problems).
If 3 comments isn’t enough to fully resolve a conversation, it’s still possible to follow up eventually.
Said can still write top level posts arguing for norms that he thinks would be better, or arguing about specific posts that he thinks are problematic.
That all said, the idea of using rate-limits as a mod-tool is pretty new, I’m not actually sure how it’ll play out. Again, I’m open to alternatives. (And again, see this post for more thoughts on rate limiting)
Feel free to argue with this decision. And again, in particular, if Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or if someone can suggest a letter-of-the-law that accomplishes the thing I’m aiming at in a more clear-cut way and that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.
This sounds drastic enough that it makes me wonder: since the claimed reason was that Said’s commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see whether there is any measurable increase in comment quality, site mood, or activity from good contributors going forward?
Also, is this an experiment with a set duration, or a permanent measure? If it’s permanent, it has a very rubber-room vibe to it, where you don’t outright ban someone, but you continually humiliate them if they keep coming by, hoping they’ll eventually get the hint.
A background model I want to put out here: two frames that feel relevant to me here are “harm minimization” and “taxing”. I think the behavior Said does has unacceptably large costs in aggregate (and, perhaps to remind/clarify, I think a similar-in-some-ways set of behaviors I’ve seen Duncan do also would have unacceptably large costs in aggregate).
And the three solutions I’d consider here, at some level of abstraction, are:
So-and-so agrees to stop doing the behavior (harder when the behavior is subtle and multifaceted, but, doable in principle)
Moderators restrict the user such that they can’t do the behavior to unacceptable degrees
Moderators tax the behavior such that doing-too-much-of-it is harder overall (but, it’s still something of the user’s choice if they want to do more of it and pay more tax).
All three options seem reasonable to me a priori; it’s mostly a question of “is there a good way to implement them?”. The current rate-limit proposal for Said is mostly option 2. All else being equal I’d probably prefer option 3, but the options I can think of seem harder to implement, and dev-time for this sort of thing is not unlimited.
Quick update for now: @Said Achmiz’s rate limit has expired, and I don’t plan to revisit applying-it-again unless a problem comes up.
I do feel like there’s some important stuff left unresolved here. @Zack_M_Davis’s comment on this other post asks some questions that seem worth answering.
I’d hoped to write up something longer this week but was fairly busy, and it seemed better to explicitly acknowledge that. For the immediate future, I think improving the auto-rate-limits and some other systemic stuff seems more important than arguing about or clarifying the particular points here.
It seems like the natural solution here would be something that establishes this common knowledge. Something like Twitter’s “community notes,” attached to relevant comments, that says something like: “There is no obligation to respond to this comment; please feel comfortable ignoring this user if you don’t feel he will be productive to engage with. Discussion here.”
Yeah, I did list that as one of the options I’d consider in the previous announcement.
A problem I anticipate is that it’s some combination of ineffective, and also in some ways a harsher punishment. But if Said actively preferred some version of this solution I wouldn’t be opposed to doing it instead of rate-limiting.
Forgive me for making what may be an obvious suggestion which you’ve dismissed for some good reason, but… is there, actually, some reason why you can’t attach such a note to all comments? (UI-wise, perhaps as a note above the comment form, or something?) There isn’t an obligation, in terms of either the site rules or the community norms as the moderators have defined them, to respond to any comment, is there? (Perhaps with the exception of comments written by moderators…? Or maybe not even those?)
That is, it seems to me that the concern here can be characterized as a question of communicating forum norms to new participants. Can it not be treated as such? (It’s surely not unreasonable to want community members to refrain from actively interfering with the process of communicating rules and norms to newcomers, such as by lying to them about what those rules/norms are, or some such… but the problem, as such, is one which should be approached directly, by means of centralized action, no?)
I think it could be quite nice to give new users information about what site norms are and give a suggested spirit in which to engage with comments.
(Though I’m sure there’s lots of things it’d be quite nice to tell new users about the spirit of the site, but there’s of course bandwidth limitations on how much they’ll read, so just because it’s an improvement doesn’t mean it’s worth doing.)
If it’s worth banning[1] someone (and even urgently investing development resources into a feature that enables that banning-or-whatever!) because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X… then it surely must be worthwhile to simply outright tell users ¬X?
(I mean, of all the things that it might be nice to tell new users, this, which—if this topic, and all the moderators’ comments on it, are to be believed—is so consequential, has to be right up at the top of list?)
Or rate-limiting, or applying any other such moderation action to.
This is not what I said though.
Now that you’ve clarified your objection here, I want to note that this does not respond to the central point of the grandparent comment:
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Communicating this to users seems like an obvious win, and one which would make a huge chunk of this entire discussion utterly moot.
Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style seems unlikely to be the right choice. You are a UI designer and you are well-aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution.
But even assuming we did add such a message, there are many other problems:
Posting such a message would communicate a level of importance for this specific norm that is not commensurate with its actual importance; it does not actually come up very frequently in conversations that don’t involve you and a small number of other users. We have the standard frontpage commenting guidelines, and they cover what I consider the most important things to communicate, and they are already approximately the maximum length I expect new users to read. Adding this warning would have to displace one of the existing guidelines, which seems very unlikely to be worth it.
Banner blindness is real: if you put the same block of text everywhere, people will quickly learn to ignore it. This has already happened with the existing moderation guidelines and frontpage guidelines.
If you have a sign in a space that says “don’t scream at people,” but then lots of people do actually scream at you in that room, the sign doesn’t actually help very much, and more likely just reduces trust in your ability to set any kind of norm in your space. I’ve done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience. The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which would just bring us back to square one.
My guess is you will respond to this with some statement of the form “but I have said many times that I do not think the norms are such that you have an obligation to respond”, but man, subtext and text do just differ frequently in communication, and the subtext of your comments does really just tend to communicate the opposite. A way out of this situation might be that you just include a disclaimer in the first comment on every post, but I can also imagine that not working for a bunch of messy reasons.
I can also imagine you responding to this with “but I can’t possibly create an obligation to respond; the only people who can do that are the moderators,” which seems to be a stance implied by some other comments you wrote recently. This stance seems to me to fail to model how actual social obligations develop and how people build knowledge about social norms in a space. The moderators only set a small fraction of the norms and culture of the site, and of course individual users can create an obligation for someone to respond.
I am not super interested in going into depth here, but felt somewhat obligated to reply since your suggestion had some number of upvotes.
First, concerning the first half of your comment (re: importance of this information, best way of communicating it):
I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”
Have you checked that users understand that they don’t have an obligation to respond to comments?
If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)
Second, concerning the second half of your comment:
Frankly, this whole perspective you describe just seems bizarre.
Of course I can’t possibly create a formal obligation to respond to comments. Of course only the moderators can do that. I can’t even create social norms that responses are expected, if the moderators don’t support me in this (and especially if they actively oppose me). I’ve never said that such a formal obligation or social norm exists; and if I ever did say that, all it would take is a moderator posting a comment saying “no, actually” to unambiguously controvert the claim.
But on the other hand, I can’t create an epistemic obligation to respond, either—because it already either exists or already doesn’t exist, regardless of what I think or do.
So, you say:
If someone writes a post and someone else (regardless of who it is!) writes a comment that says “what are some examples?”, then whether the post author “faces humiliation” (hardly the wording I’d choose, but let’s go with it) in front of the Less Wrong audience if they don’t respond is… not something that I can meaningfully affect. That judgment is in the minds of the aforesaid audience. I can’t make people judge thus, nor can I stop them from doing so. To ascribe this effect to me, or to any specific commenter, seems like willful denial of reality.
This would be a highly unreasonable response. And the correct counter-response by moderators, to such a question, would be:
“Because users can’t ‘create a strong obligation for you to respond’. We’ve made it clear that you have no such obligation. (And the commenters certainly aren’t claiming otherwise, as you can see.) It would be utterly absurd for us to moderate or delete these comments, just because you don’t want to respond to them. If you feel that you must respond, respond; if you don’t want to, don’t. You’re an adult and this is your decision to make.”
(You might also add that the downvote button exists for a reason. You might point out, additionally, that low-karma comments are hidden by default. And if the comments in question are actually highly upvoted, well, that suggests something, doesn’t it?)
(I am not planning to engage further at this point.
My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don’t think I am saying particularly complicated things, and I think I’ve communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them.
My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we’ll continue to take some moderator actions until things look better by our models. I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.)
This seems like an odd response.
In the first part of the grandparent comment, I asked a couple of questions. I can’t possibly “figure out what you mean” in those cases, since they were questions about what you’ve done or haven’t done, and about what you think of something I asked.
In the second part of the grandparent comment, I gave arguments for why some things you said seem wrong or incoherent. There, too, “figuring out what you mean” seems like an inapplicable concept.
You and the other moderators have certainly written many words. But only the last few comments on this topic have contained even an attempted explanation of what problem you’re trying to solve (this “enforcement of norms” thing), and there, you’ve not only not “gone far beyond your duty” to explain—you’ve explicitly disclaimed any attempt at explanation. You’ve outright said that you won’t explain and won’t try!
It’s important for users to know when it comes up. It doesn’t come up much except with you.
(I wrote the following before habryka wrote his message)
While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I’ve been expressing concerns about in this particular discussion.
The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).
But I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to another user twice in one comment thread (or when ~2 other users in a thread have done so).
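A minimal sketch of what that trigger might look like, with made-up names and thresholds (the real feature would presumably need tuning and more signals):

```typescript
// One "downvoted-and-replied" event within a single comment thread.
interface DisengageSignal {
  actorId: string;  // the user who downvoted and replied
  targetId: string; // the user they downvoted-and-replied to
}

// Show the "consider downvoting and moving on" nudge when one user has
// downvoted-and-replied to the same person twice in a thread, or when
// ~2 distinct users have each done so at least once.
function shouldShowDisengageNudge(
  signals: DisengageSignal[],
  targetId: string,
): boolean {
  const countsByActor = new Map<string, number>();
  for (const s of signals) {
    if (s.targetId !== targetId) continue;
    countsByActor.set(s.actorId, (countsByActor.get(s.actorId) ?? 0) + 1);
  }
  const repeatActor = Array.from(countsByActor.values()).some(n => n >= 2);
  const severalActors = countsByActor.size >= 2;
  return repeatActor || severalActors;
}
```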
This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)
Oh? My mistake, then. Should it be “because their comments have, on some occasions, misled users into falsely believing X”?
(It’s not clear to me, I will say, whether you are claiming this is actually something that ever happened. Are you? I will note that, as you’ll find if you peruse my comment history, I have on more than one occasion taken pains to explicitly clarify that Less Wrong does not, in fact, have a norm that says that responding to comments is mandatory, which is the opposite of misleading people into believing that such a norm exists…)
No. This is still oversimplifying the issue, which I specifically disclaimed. Ben Pace gives a sense of it here:
The problem is implicit enforcement of norms. Your stated beliefs do help alleviate this but only somewhat. And, like Ben also said in that comment, from a moderator perspective it’s often correct to take mod action regardless of whether someone meant to do something we think has had an outsized harm on the site.
I’ve now spent (honestly, more than) the amount of time I endorse on this discussion. I am still mulling over the overall discussion, but in the interest of declaring this done for now, I’m declaring that we’ll leave the rate limit in place for ~3 months and re-evaluate then. I feel pretty confident doing this because it seems commensurate with the original moderation warning (i.e. a 3-month rate limit seems similar to me in magnitude to a 1-month ban, and I think Said’s comments in the Duncan/Said conflict count as a triggering instance).
I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well, but I think we explained it well enough that it should be fairly obvious to you why your comment here is missing the point, and if it’s not, I don’t really know what to do about that.
Alright, fair enough, so then…
… but then my next question is:
What the heck is “implicit enforcement of norms”??
To be quite honest, I think you have barely explained it at all. I’ve been trying to get an explanation out of you, and I have to say: it’s like pulling teeth. It seems like we’re getting somewhere, finally? Maybe?
You’re asking me to change my commenting behavior. I can’t even consider doing that unless I know what you think the problem is.
So, questions:
What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?
This “implicit enforcement of norms” (whatever it is)—is it a problem over and above making false claims about what norms exist?
If the answer to #2 is “yes”, then what is your response to my earlier comments pointing out that no such false claims took place?
A norm is a pattern of behavior, something people can recognize and enact. Feeding a norm involves making a pattern of behavior more available (easy to learn and perceive), and more desirable (motivating its enactment, punishing its non-enactment). A norm can involve self-enforcement (here “self” refers to the norm, not to a person), adjoining punishment of non-enforcers and reward of enforcers as part of the norm. A well-fed norm is ubiquitous status quo, so available you can’t unsee it. It can still be opted-out of, by not being enacted or enforced, at the cost of punishment from those who enforce it. It can be opposed by conspicuously doing the opposite of what the norm prescribes, breaking the pattern, thus feeding a new norm of conspicuously opposing the original norm.
Almost all anti-epistemology is epistemic damage perpetrated by self-enforcing norms. Tolerance is boundaries against enforcement of norms. Intolerance of tolerance breaks it down, tolerating tolerance allows it to survive, restricting virality of self-enforcing norms. The self-enforcing norm of tolerance that punishes intolerance potentially exterminates valuable norms, not obviously a good idea.
So there is a norm of responding to criticism; its power is the weight of the obligation to do so. It always exists in principle, at some level of power, not as categorically absent or present. I think clearly there are many ways of feeding that norm, or of not depriving it of influence, that are rather implicit.
(Edit: Some ninja-editing, Said quoted the pre-edit version of third paragraph. Also fixed the error in second paragraph where I originally equivocated between tolerating tolerance and self-enforcing tolerance.)
Perhaps, for some values of “feeding that norm” and “[not] not depriving it of influence”. But is this “enforcement”? I do not think so. As far as I can tell, when there is a governing power (and there is surely one here), enforcement of the power’s rules can be done by that power only. (Power can be delegated—such as by the LW admins granting authors the ability to ban users from their posts—but otherwise, it is unitary. And such delegated power isn’t at all what’s being discussed here, as far as I can tell.)
That’s fair, but I predict that the central moderators’ complaint is in the vicinity of what I described, and has nothing to do with more specific interpretations of enforcement.
If so, then that complaint seems wildly unreasonable. The power of moderators to enforce a norm (or a norm’s opposite) is vastly greater than the power of any ordinary user to subtly influence the culture toward acceptance or rejection of a norm. A single comment from a moderator so comprehensively outweighs the influence, on norm-formation, of even hundreds of comments from any ordinary user, that it seems difficult to believe that moderators would ever need to do anything but post the very occasional short comment that links to a statement of the rules/norms and reaffirms that those rules/norms are still in effect.
(At least, for norms of the sort that we’re discussing. It would be different for, e.g., “users should do X”. You can punish people for breaking rules of the form “users should never do X”; that’s easy enough. Rules/norms of the form “users don’t need to do X”—i.e., those like the one we’ve been discussing—are even easier; you don’t need to punish anything, just occasionally reaffirm or remind people that X is not mandatory. But “users should do X” is tricky, if X isn’t something that you can feasibly mandate; that takes encouragement, incentives, etc. But, of course, that isn’t at all the sort of thing we’re talking about…)
Everyone can feed a norm, and direct action by moderators can be helpless before strong norms, as scorched-earth capabilities can still be insufficient for reaching more subtle targets. Thus discouraging the feeding of particular norms rather than going against the norms themselves.
If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.
What you are describing would have to be a very well-entrenched and widespread norm, supported by many users, and opposed by few users. Such a thing is perhaps possible (I have my doubts about this; it seems to me that such a hypothetical scenario would also require, for one thing, a lack of buy-in from the moderators); but even if it is—note how far we have traveled from anything resembling the situation at hand!
Motivation gets internalized, following a norm can be consciously endorsed, disobeying a norm can be emotionally valent. So it’s not just about external influence in affecting the norm, there is also the issue of what to do when the norm is already in someone’s head. To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster. But I think it’s a real thing that happens all the time.
This particular norm is obviously well-known in the wider world, some people have it well-entrenched in themselves. The problem discussed above was reinforcing or spreading the norm, but there is also a problem of triggering the norm. It might be a borderline case of feeding it (in the form of its claim to apply on LW as well), but most of the effect is in influencing people who already buy the norm towards enacting it, by setting up central conditions for its enactment. Which can be unrewarding for them, but necessary on the pain of disobeying the norm entrenched in their mind.
For example, what lsusr is talking about here is trying not to trigger the norm. Statements are less imposing than questions in that they are less valent as triggers for response-obligation norms. This respects boundaries of people’s emotional equilibrium, maintains comfort. When the norms/emotions make unhealthy demands on one’s behavior, this leads to more serious issues. It’s worth correcting, but not without awareness of what might be going on. I guess this comes back to motivating some interpretative labor, but I think there are relevant heuristics at all levels of subtlety.
Just so.
In general, what you are talking about seems to me to be very much a case of catering to utility monsters, and denying that people have the responsibility to manage their own feelings. It should, no doubt, be permissible to behave in such ways (i.e., to carefully try to avoid triggering various unhealthy, corrosive, and self-sabotaging habits / beliefs, etc.), but it surely ought not be mandatory. That incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive.
EDIT: Also, and importantly, I think that describing this sort of thing as a “norm” is fundamentally inaccurate. Such habits/beliefs may contribute to creating social norms, but they are not themselves social norms; the distinction matters.
That’s one side of an idealism debate, a valid argument that pushes in this direction; but there are other arguments that push in the opposite direction, so it’s not one-sided.
Some people change, given time or appropriate prodding. There are ideological (as in the set of endorsed principles) or emotional flaws, lack of capability at projecting sufficiently thick skin, or of thinking in a way that makes thick skin unnecessary, with defenses against admitting this or being called out on it. It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering, as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly.
I retain my view that to a first approximation, people don’t change.
And even if they do—well, when they’ve changed, then they can participate usefully and non-destructively. Personal flaws are, in a sense, forgiveable, as we are all human, and none of us is perfect; but “forgiveable” does not mean “tolerable, in the context of this community, this endeavor, this task”.
I think we are very far from “zero” in this regard. Going all the way to “zero” is not even what I am proposing, nor would propose (for example, I am entirely in favor of forbidding personal insults, vulgarity, etc., even if some hypothetical ideal reasoner would be entirely unfazed even by such things).
But that the damage done by catering to “utility monsters” of the sort who find requests for clarification to be severely unpleasant, is profound and far-ranging, seems to me to be too obvious to seriously dispute. It’s hypothetically possible to acknowledge this while claiming that failing to cater thusly has even more severely damaging consequences, but—well, that would be one heck of an uphill battle, to make that case.
Well, I’m certainly all for that.
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples, and there are no standard words like “weak insult” to delineate the issue; it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I agree that unreasonable demands are unreasonable. Pointing them out gains more weight after you signal ability to correctly perceive the distinction between “reasonable”/excusable and clearly unreasonable demands for catering. Though that often leads to giving up or not getting involved. So there is value in idealism in a neglected direction, it keeps the norm of being aware of that direction alive.
I must confess that I am very skeptical. It seems to me that any relevant thing that would need to be avoided, is a thing that is actually good, and avoiding which is bad (e.g., asking for examples of claims, concretizations of abstract concepts, clarifications of term usage, etc.). Of course if there were some action which were avoidable as cheaply (both in the “effort to avoid” and “consequences of avoiding” sense) as vulgarity and personal insults are avoidable, then avoiding it might be good. (Or might not; there is at least one obvious way in which it might actually be bad to avoid such things even if it were both possible and cheap to do so! But we may assume that possibility away, for now.)
But is there such a thing…? I find it difficult to imagine what it might be…
I agree that it’s unclear that steps in this direction are actually any good, or if instead they are mildly bad, if we ignore instances of acute conflict. But I think there is room for optimization that won’t have substantive negative consequences in the dimensions worth caring about, but would be effective in avoiding conflict.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is. To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
So it’s things like adopting lsusr’s suggestion to prefer statements to questions. A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs. If it’s still not obvious, it either wouldn’t work with more explicit explanation, or it’s my argument’s problem, and then it’s no loss, this heuristic leaves the asymmetry intact. I might clarify when asked for clarification. Things like that, generated as appropriate by awareness of this objective.
One does not capitulate to utility monsters, especially not if one’s life isn’t dependent on it.
I wholly agree.
As I said in reply to that comment, it’s an interesting suggestion, and I am not entirely averse to applying it in certain cases. But it can hardly be made into a rule, can it? Like, “avoid vulgarity” and “don’t use direct personal attacks” can be made into rules. There generally isn’t any reason to break them, except perhaps in the most extreme, rare cases. But “prefer statements to questions”—how do you make that a rule? Or anything even resembling a rule? At best it can form one element of a set of general, individually fairly weak, suggestions about how to reduce conflict. But no more than that.
I follow just this same heuristic!
Unfortunately, it doesn’t exactly work to eliminate or even meaningfully reduce the incidence of utility-monster attack—as this very post we’re commenting under illustrates.
(Indeed I’ve found it to have the opposite effect. Which is a catch-22, of course. Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.)
I’m gesturing at settling into an unsatisfying strategic equilibrium, as long as there isn’t enough engineering effort towards clarifying the issue (negotiating boundaries that are more reasonable-on-reflection than the accidental status quo). I don’t mean capitulation as a target even if the only place “not provoking” happens to lead is capitulation (in reality, or given your model of the situation). My model doesn’t say that this is the case.
The problem with this framing (as you communicate it, not necessarily in your own mind) is that it could look the same even if there are affordances for de-escalation at every step, and it’s unclear how efficiently they were put to use (it’s always possible to commit a lot of effort towards measures that won’t help; the effort itself doesn’t rule out availability of something effective). Equivalence between “not provoking” and “capitulation” is a possible conclusion from observing absence of these affordances, or alternatively it’s the reason the affordances remain untapped. It’s hard to tell.
What would any of what you’re alluding to look like, more concretely…?
(Of course I also object to the term “de-escalation” here, due to the implication of “escalation”, but maybe that’s beside the point.)
Like escalation makes a conflict more acute, de-escalation settles it. Even otherwise uninvolved parties could plot either, there is no implication of absence of de-escalation being escalation. Certainly one party could de-escalate a conflict that the other escalates.
Some examples are two comments up, as well as your list of things that don’t work. Another move not mentioned so far is deciding to exit certain conversations.
The harder and more relevant question is whether some of these heuristics have the desired effect, and which ones are effective when. I think only awareness of the objective of de-escalation could apply these in a sensible way, specific rules (less detailed than a book-length intuition-distilling treatise) won’t work efficiently (that is, without sacrificing valuable outcomes).
I don’t think I disagree with anything you say in particular, not exactly, but I really am not sure that I have any sense of what the category boundaries of this “de-escalation” are supposed to be, or what the predicate for it would look like. (Clearly the naive connotation isn’t right, which is fine—although maybe it suggests a different choice of term? or not, I don’t really know—but I’m not sure where else to look for the answers.)
Maybe this question: what exactly is “the desired effect”? Is it “avoid conflict”? “Avoid unnecessary conflict”? “Avoid false appearance of conflict”? “Avoid misunderstanding”? Something else?
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.
In some situations acute conflict could be useful, a Schelling point for change (time to publish relevant essays, which might be heard more vividly as part of this event). If it’s not useful, I think de-escalation is the way, with absence of acute conflict as the desired effect.
(De-escalation is not even centrally avoidance of individual instances of conflict. I think it’s more important what the popular perception of one’s intentions/objectives/attitudes is, and to prevent formation of grudges. Mostly not bothering those who probably have grudges. This more robustly targets absence of acute conflict, making some isolated incidents irrelevant.)
Is this really anything like a natural category, though?
Like… obviously, “moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized” are things that happen. But once you say “not necessarily intentionally” in your definitions of “escalation” and “de-escalation”, aren’t you left with “whatever actions happen to increase the chance of there being an acute conflict” (and similarly, “decrease” for “de-escalation”)? But what actions have these effects clearly depends heavily on all sorts of situational factors, identities and relationships of the participants, the subject matter of the conversation, etc., etc., such that “what specific actions will, as it will turn out, have contributed to increasing/decreasing the chance of conflict in particular situation X” is… well, I don’t want to say “not knowable”, but certainly knowing such a thing is, so to speak, “interpersonal-interaction-complete”.
What can really be said about how to avoid “acute conflict” that isn’t going to have components like “don’t discuss such-and-such topics; don’t get into such-and-such conversations if people with such-and-such social positions in your environment have such-and-such views; etc.”?
Or is that in fact the sort of thing you had in mind?
I guess my question is: do you envision the concrete recommendations for what you call “de-escalation” or “avoiding acute conflict” to concern mainly “how to say it”, and to be separable from “what to say” and “whom to say it to”? It seems to me that such things mostly aren’t separable. Or am I misunderstanding?
(Certainly “not bothering those who probably have grudges” is basically sensible as a general rule, but I’ve found that it doesn’t go very far, simply because grudges don’t develop randomly and in isolation from everything else; so whatever it was that caused the grudge, is likely to prevent “don’t bother person with grudge” from being very applicable or effective.)
Also, it almost goes without saying, but: I think it is extremely unhelpful and misleading to refer to the sort of thing you describe as “enforcement”. This is not a matter of “more [or less] specific interpretation”; it’s just flatly not the same thing.
This might be a point of contention, but honestly, I don’t really understand, and do not find myself that curious about, a model of social norms that would produce the belief that only moderators can enforce norms in any way, and I am bowing out of this discussion. (The vast majority of social spaces with norms do not even have any kind of official moderator, so what does this model predict about, say, the average dinner party or college class?)
My guess is 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space and can maybe engage with Said on explaining this, and I would appreciate someone else jumping in and explaining those models, but I don’t have the time and patience to do this.
All right, I’ll give it a try (cc @Said Achmiz).
Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this “hard enforcement”—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that this will happen (that unpleasantness will be inflicted on violators), which we might call “soft enforcement”.[1]
Bans are hard enforcement. Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there’s some element of hardness. Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement. Now, compare with Said’s description elsewhere:
Said is clearly aware of hard enforcement and calls that “enforcement”. Meanwhile, what I call “soft enforcement”, he says isn’t anything at all like “enforcement”. One could put this down to a mere difference in terms, but I think there’s a little more.
It seems accurate to say that Said has an extremely thick skin. Probably to some extent deliberately so. This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash (“soft enforcement”) seems to not bother him, perhaps not even register to him. Lots of people would do well to be more like him in this respect.
However, it seems that Said may be unaware of the degree to which he’s different from most people in this[2]. (Either in naturally having a thick skin, or in thinking “this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes”, or something like that.) It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I anticipate a possible objection here: “Well, if I incentivize people to think more rigorously, that seems like a good thing.” At this point the question is “Do Said’s comments enforce any norm at all?”, not “Are Said’s comments pushing people in the right direction?”. (For what it’s worth, my vague memory includes some instances of “Said is asking the right questions” and other instances of “Said is asking dumb questions”. I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.)
Second anticipated objection: “I’m not deliberately trying to enforce anything.” I think it’s possible to do this non-deliberately, even self-destructively. For example, a person could tell their friends “Please tell me if I’m ever messing up in xyz scenarios”, but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc. (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.) Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding “Yeesh. This interaction was not worth it; I won’t bother next time.”
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of “If your posts aren’t written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies”, and thus this constitutes soft-enforcing a norm of “writing posts in a certain way”.
Or, regarding the “clarifying questions need replies or else you look ~unrigorous” norm… Actually, technically, I would say that’s not a norm Said enforces; it’s more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies). As Said says elsewhere, it’s just a fact that, if someone asks a clarifying question and you don’t have an answer, there are various possible explanations for this, one of which is “your idea is wrong”.[4] And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said’s questions do promulgate this norm even if they don’t enforce it.
Moreover, this being the website that hosts Be Specific, this norm is stronger here than elsewhere. Which… I do like; I don’t want to make excuses for people being unrigorous or weak. But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category. There’s a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering. My brain generates the story “Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren’t like him in that respect, and isn’t so unusually good at relating to others very unlike him that he’s able to judge the costs accurately; in practice he underestimates the costs and asks too often.”
And usually anything that does (a) also does (b). Removing someone’s ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus the methods of removing capabilities are usually pretty coarse-grained. In the physical world, imprisonment is the prototypical example here.
It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it’d be difficult for them to come to common understanding.
There was a time at work when I was running a script that caused problems for a system. I’d say this could be called the system’s fault: one piece of the causal chain was a policy of the system’s that I’d never heard of and that seemed like the wrong policy, and another piece was the system misidentifying a certain behavior.
In any case, the guy running the system didn’t agree with the goal of my script, and I suspect resented me because of the trouble I’d caused (in that and in some other interactions). I don’t think he had the standing to say I’m forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts. Anyway, this was a side project for me, and I didn’t care enough about it to push through that, so I dropped it. (Whether this was his intent, I’m not sure, but he certainly didn’t object to the result.)
Incidentally, the more reasonable and respectable the questioner looks, that makes explanations like “you think the question is stupid or not worth your time” less plausible, and therefore increases the pressure to reply on someone who doesn’t want to look wrong. (One wonders if Said should wear a jester’s cap or something, or change his username to “troll”. Or maybe Said can trigger a “Name Examples Bot”, which wears a silly hat, in lieu of asking directly.)
(Separately from my longer reply: I do want to thank you for making the attempt.)
I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.
I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.
And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!
There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.
Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of quarrel or hurtful interpersonal conflict with someone close to you).
But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)
As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)
I’ll single out just one last bit:
I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes:
I must confess that I don’t sympathize much with those who object majorly. I feel comfortable with letting conversations on the public internet fade without explanation. “I would love to reply to everyone [or, in some cases, “I used to reply to everyone”] but that would take up more than all of my time” is something I’ve seen from plenty of people. If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I’d roll my eyes, sigh, say to myself “Said is being obtuse”, and move on.
That said, I know that I am also a weird alien. So here is my attempt to describe the others:
“I do reply to every single comment” is a thing some people do, often in their early engagement on a platform, when their status is uncertain. (I did something close to that on a different forum recently, albeit more calculatedly as an “I want to reward people for engaging with my post so they’ll do more of it”.) There isn’t really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
I also do feel some pressure to reply if the commenter is a friend I see in person—that it’s a little awkward if I don’t. This presumably doesn’t apply here.
I think some people have a self-image that they’re “polite”, which they don’t reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being “polite” means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example of this is Bilbo feeling he “has to” feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
If the conversation begins well enough, that may create more of a politeness obligation in some people’s heads. The fact that someone had to create the term “tapping out” is evidence that some people’s priors were that simply dropping the conversation was impolite.
Looking at what’s been said, “frustration” is mentioned. It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you’ll say “Aha, I understand, thank you”; they’ll be pleased with this result), and if instead it leads to several levels of “I don’t understand, please explain further” before they finally give up, then they may be disappointed ex post. Particularly if they’ve never had an interaction like this before, they might not have known what else to do, and just kept putting in effort much longer than a more sophisticated version of them would have recommended. Then they come away from the experience thinking, “I posted, and I ended up in a long interaction with Said, and wow, that sucked. Not eager to do that again.”
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
In some places on the internet, trolling is or has been a major problem. Making someone do a bunch of work by repeatedly asking “Why?” and “How do you know that?”, and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people. (Some of my friends who like to tease have occasionally done that.) If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it’s deliberate. And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.
(I personally tend to welcome the challenge of explaining myself, because I’m proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not. Perhaps some people have memories of being tripped up and embarrassed. Such people should get over it, but given that not all of them have done so… we shouldn’t bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
I find this hard to relate to—I’m extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with “Ooh, I hope so, I wish that were so! (But I doubt it!)”; if someone comes away thinking I’m stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is). I suspect your background resembles mine in this respect.
But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they’re wrong (and backs it up). (To some extent this may be due to authority-keeping issues.) I hear that often kids in school are really afraid of being called, or shown to be, stupid. John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:
(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high (relative to their peers in their formative years), so this would be a self-censoring fear. I don’t think I’ve seen anyone mention intellectual insecurity in connection with this whole discussion, but I’d say it likely plays at least a minor role, and plausibly plays a major role.)
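(To spell that inference out in symbols—just a gloss of the claim above, with notation of my own choosing: write $F$ for “visibly afraid of being shown to be stupid” and $H$ for “high intelligence”. The claim is that $P(F \mid \neg H) > P(F \mid H)$, in which case Bayes’ rule gives

$$\frac{P(\neg H \mid F)}{P(H \mid F)} \;=\; \frac{P(F \mid \neg H)}{P(F \mid H)} \cdot \frac{P(\neg H)}{P(H)} \;>\; \frac{P(\neg H)}{P(H)},$$

so displaying the fear shifts the odds toward exactly the hypothesis one fears having confirmed—hence “self-censoring”.)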
Again, if school traumatizes people into having irrational fears about this, that’s not a good thing, it’s the schools’ fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven’t gotten over it, it is useful to know this, and it’s justifiable to put some effort into accommodating them. How much effort is up for debate.
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
Gwern is strong. You (and Zack) are also strong. Some people are weaker. One could design a forum that made zero accommodations for the weak. The idea is appealing; I expect I’d enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts. I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died. One could argue that, even if that’s true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable. Maybe. One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan, he says that most people are not “Keepers” (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they’re dealing with, etc.), that most people are not fit to be Keepers, and that it’s fine and good that they don’t hold themselves to that standard. Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum. Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum. If LW is to be the Keeper forum, where are the Keepers trained? The SSC subreddit? Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?
I don’t know. It could be the right idea. I would give it… 25%? odds that this is better than some more civilian-accommodating thing like what we have today. I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team. (I also note that, if we manage to do something like enhance the overall population’s intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.) And no, I don’t think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.
Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:
To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)
However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)
“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)
What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.
Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.
It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, and (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course that is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).
Definitely. (As I’ve alluded to earlier in this comment section, I am quite familiar with this problem from the administrator’s side.)
But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)
In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.
As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).
There are indeed some really impressive people on Less Wrong. (Their proportion in the overall membership is of course lower than it was in the “glory days”, but nevertheless they are a non-trivial contingent.) And the worry is not, perhaps, that one will be made to look stupid in front of them, but rather that one will waste their time. “Who am I,” the potential contributor might think, “to offer my paltry thoughts on any of these lofty matters, to be listed alongside the writings of these greats, such that the important and no doubt very busy people who read this website will have to sift through the dross of my embarrassingly half-formed theses and idle ramblings, in the course of their readings here?” And then, when such a person gets up the confidence and courage to post, if the comments they get prove at once (to their minds) that all their worries were right, that what they’ve written is worthless, little more than spam—well, surely they’ll be discouraged, their fears reinforced, their shaky confidence shattered; and they won’t post again. “I have nothing to contribute,” they will think, “that is worthy of this place; I know this for a fact; see how my attempts were received!”
I’ve seen many people express worries like this. And there are, I think, a few things to say about the matter.
First: however relevant this worry may have been once, it’s hardly relevant now.
This is for two reasons, of which the first is that the new Less Wrong is designed precisely to alleviate such worries, with the “personal” / “frontpage” distinction. Well, at least, that would be true, if not for the LW moderators’ quite frustrating policy of pushing posts to the frontpage section almost indiscriminately, all but erasing the distinction, and preventing it from having the salutary effect of alleviating such worries as I have described. (At least there’s Shortform, though?)
The second reason why this sort of worry is less relevant is simply that there’s so much more garbage on Less Wrong today. How plausible is it, really, to look at the current list of frontpage posts, and think “gosh, who am I to compete for readers’ time with these great writings, by these great minds?” Far more likely is the opposite thought: “what’s the point of hurling my thoughts into this churning whirlpool of mediocrity?” Alright, so it’s not quite Reddit, but it’s bad enough that the moderators have had to institute a whole new set of moderation policies to deal with the deluge! (And well done, I say, and long overdue—in this, I wholly support their efforts.)
Second: I recall someone (possibly Oliver Habryka? I am not sure) suggesting that the people who are most worried about not measuring up tend also to be those whose contributions would be some of the most useful. This is a model which is more or less the opposite of your suggestion that “someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high”; it claims, instead, something like “someone being afraid that they won’t measure up is probably Bayesian evidence that their intellectual standards as applied to themselves are high, and that their ideas are valuable”.
I am not sure to what extent I believe either of these two models. But let us take the latter model for granted, for a moment. Under this view, any sort of harsh criticism, or even just anything but the most gentle handling and the most assiduous bending-over-backwards to avoid any suggestion of criticism, risks driving away the most potentially valuable contributors.
Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.
But even taking all of that for granted—you still haven’t solved the fundamental problems.
Problem (a): even the cleverest of thinkers and writers sometimes have good ideas but sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t engage only ever on the assumption that the author’s ideas are without flaw, and that your part is only to respectfully learn at the author’s feet. That doesn’t work.
Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)
No, this is just confused.
Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.
There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.
But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.
Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.
I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.
Just for the record, your first comment was quite good at capturing some of the models that drive me and the other moderators.
This one is not, which is fine and wasn’t necessarily your goal, but I want to prevent any future misunderstandings.
I’m super not interested in putting effort into talking about this with Said. But a low-effort thing to say is: my review of Order Without Law seems relevant. (And the book itself more so, but that’s less linkable.)
I do recall reading and liking that post, though it’s been a while. I will re-read it when I have the chance.
But for now, a quick question: do you, in fact, think that the model described in that post applies here, on Less Wrong?
(If this starts to be effort I will tap out, but briefly:)
It’s been a long time since I read it too.
I don’t think there’s a specific thing I’d identify as “the model described in that post”.
There’s a hypothesis that forms an important core of the book and probably the review; but it’s not the core of the reason I pointed to it.
I do expect bits of both the book and the review apply on LW, yes.
Well, alright, fair enough.
Could you very briefly say more about what the relevance is, then? Is there some particular aspect of the linked review of which you think I should take note? (Or is it just that you think the whole review is likely to contain some relevant ideas, but you don’t necessarily have any specific parts or aspects in mind?)
Sorry. I spent a few minutes trying to write something and then decided it was going to be more effort than I wanted, so...
I do have something in mind, but I apparently can’t write it down off the cuff. I can gesture vaguely at the title of the book, but I suppose that’s unlikely to be helpful. I don’t have any specific sections in mind.
(I think I’m unlikely to reply again unless it seems exceptionally likely that doing so will be productive.)
Alright, no worries.
Dinner parties have hosts, who can do things like: ask a guest to engage or not engage in some behavior; ask a guest to leave if they’re disruptive or unwanted; not invite someone in the first place; in the extreme, call the police (having the legal standing to do so, as the owner of the dwelling where the party takes place).
College classes have instructors, who can do things like: ask a student to engage or not engage in some behavior; ask a student to leave if they’re disruptive; cause a student to be dropped from enrollment in the course; call campus security to eject the student (having the organizational and legal standing to do so, as an employee of the college, who is granted the mandate of running the lecture/course/etc.).
(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)
I, too, am capable of describing such a model.
But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Upvotes/downvotes are very slightly like “enforcement”. (But of course we’re not talking about upvotes/downvotes here.)
Banning a user from your posts is a bit more like “enforcement”. (But we’re definitely not talking about that here.)
Given the existence of moderators on Less Wrong, I do not, indeed, see any way to describe anything that I have ever done as “enforcement” of anything. It seems to me that such a claim is incoherent.
That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.
One last reply:
Indeed, college classes (and classes in general) seem like an important case study, since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that there are clearly many other sources of norms and of the associated enforcement of norms.
Experiencing those bottom-up norms is a shared experience, since almost everyone went through high school and college, so it seems like a good reference point.
Of course this is true; it is not just the instructor, but also the college administration, etc., that function as setters and enforcers of norms.
But it sure isn’t the students!
(And this is even more true in high school. The students have no power to set any norms, except that which is given them by the instructor/administration/etc.—and even that rarely happens.)
Have you been to an American high school and/or watched at least one movie about American high schools?
I have done both of those things, yes.
EDIT: I have also attended not one but several (EDIT 2: four, in fact) American colleges.
A frequent plot point of high school movies is precisely what is and isn’t acceptable to do, socially. For example, Regina in Mean Girls enforced a number of rules on her clique, and attempted, with significant but not complete success, to enforce them on others.
I do think it would be useful for you to say how much time should elapse without a satisfactory reply by some representative members of this 95% before we can reasonably evaluate whether this prediction has been proven true.
Oh, the central latent variable in my uncertainty here is “is anyone willing to do this?”, not “is anyone capable of this?”. My honest guess is that the answer is “no”, because this kind of conversation really doesn’t seem fun, and we are 7 levels deep into a 400-comment post.
My guess is that if you actively reach out and put effort into trying to get someone to explain it to you—by e.g. putting out a bounty, making a top-level post, or somehow sending a costly signal that you are genuinely interested in understanding—then there is a much higher chance of that, but I don’t currently expect that to happen.
You do understand, I hope, how this stance boils down to “we want you to stop doing a thing, but we won’t explain what that thing is; figure it out yourself”?
No, it boils down to “we will enforce consistent rules and spend like 100+ hours trying to explain them if an established user is confused, and if that’s not enough, then I guess that’s life and we’ll move on”.
Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate, given the really quite extraordinary amount of time we have spent over the years trying to get our models across. You can of course complain about the ineffectuality of our efforts to explain, but I do not think you can deny the effort, and I do not currently know what to do that doesn’t involve many additional hours of effort.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
I’ve been told (and only after much effort on my part in trying to get an answer) that the problem being solved here is something called “(implicit) enforcement of norms” on my part. I’ve yet to see any comprehensible (or even, really, seriously attempted) explanation of what that’s supposed to mean, exactly, and how any such thing can be done by a (non-moderator) user of Less Wrong. You’ve said outright that you refuse to attempt an explanation. “Unwilling to explain what the thing is” seems entirely accurate.
The ones we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and have been talking about for five-plus years, concerning what the cost of your comments to the site has been.
It does not surprise me that you cannot summarize or restate them in a way that shows you understand them, which is why more effort on explaining them does not seem worth it. The concepts here are also genuinely kind of tricky, and we seem to be coming from very different perspectives and philosophies; and while I do experience frustration, I can also see why this looks very frustrating for you.
I agree that I personally haven’t put a ton of effort in at this specific point in time (though like 2-3 hours for my comments with Zack, which seem related), though I have spent many dozens of hours in past years trying to point to what seem to me to be the same disagreements.
But which are not, like… stated anywhere? Like, in some sort of “what are the rules of this website” page, which explains these rules?
Don’t you think that’s an odd state of affairs, to put it mildly?
The concept of “ignorance of the law is no excuse” was mentioned earlier in this discussion, and it’s a reasonable one in the real world, where you generally can be aware of what the law is, if you’re at all interested in behaving lawfully[1]. If you get a speeding ticket, and say “I didn’t know I was exceeding the speed limit, officer”, the response you’ll get is “signs are posted; if you didn’t read them, that’s no excuse”. But that’s because the signs are, in fact, posted. If there were no signs, then it would just be a case of the police pulling over whoever they wanted, and giving them speeding tickets arbitrarily, regardless of their actual speed.
You seem to be suggesting that Less Wrong has rules (not “norms”, but rules!), which are defined only in places like “long, branching, deeply nested comment threads about specific moderation decisions” and “scattered over years of discussion with some specific user(s)”, and which are conceptually “genuinely kind of tricky”; but that violating these rules is punishable, like any rules violation might be.
Does this seem to you like a remotely reasonable way to have rules?
But note that this, famously, is no longer true in our society today, which does indeed have some profoundly unjust consequences.
I think we’ve tried pretty hard to communicate our target rules in this post and previous ones.
The best operationalization of them is in this comment, as well as the moderation warning I made ~5 years ago: https://www.lesswrong.com/posts/9DhneE5BRGaCS2Cja/moderation-notes-re-recent-said-duncan-threads?commentId=y6AJFQtuXBAWD3TMT
These are in a pinned moderator-top-level comment on a moderation post that was pinned for almost a full week, so I don’t think this counts as being defined in “long, branching, deeply nested comment threads about specific moderation decisions”. I think we tried pretty hard here to extract the relevant decision-boundaries and make users aware of how we plan to make decisions going forward.
We are also thinking about how to think about having site-wide moderation norms and rules that are more canonical, though I share Ruby’s hesitations about that: https://www.lesswrong.com/posts/gugkWsfayJZnicAew/should-lw-have-an-official-list-of-norms
I don’t know of a better way to have rules than this. As I said in a thread to Zack, case-law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does produce a body of law that you can only navigate successfully by studying the lines revealed through past litigation.
EDIT: Why do my comments keep double-posting? Weird.
… that comment is supposed to communicate rules?!
It says:
The only thing that looks like a rule here is “don’t imply people have an obligation to engage with [your] comments”. Is that the rule you’ve been talking about? (I asked this of Raemon and his answer was basically “yes but not only”, or something like that.)
And the rest pretty clearly suggests that there isn’t a clearly defined rule here.
The mod note from 5 years ago seems to me to be very clearly not defining any rules.
Here’s a question: if you asked ten randomly selected Less Wrong members: “What are the rules of Less Wrong?”—how many of them would give the correct answer? Not as a link to this or that comment, but in their own words (or even just by quoting a list of rules, minus the commentary)?
(What is the correct answer?)
How many of their answers would even match one another?
Yes, of course, but the way this works in real-world legal systems is that first there’s a law, and then there’s case law which establishes precedent for its application. (And, as you say, it hardly makes it easy to comply with the law. Perhaps I should retain an attorney to help me figure out what the rules of Less Wrong are? Do I need to have a compliance department…?) Real-world legal systems in well-functioning modern countries generally don’t take the approach of “we don’t have any written down laws; we’ll legislate by judgment calls in each case; even after doing that, we won’t encode those judgments into law; there will only be precedent and judicial opinion, and that will be the whole of the law”.[1]
Have there been societies in the past which have worked like this? I don’t know. Maybe we can ask David Friedman?
Do I understand you correctly as saying that the problem, specifically, is… that people reading my comments might, or do, get a mistaken impression that there exists on Less Wrong some sort of social norm which holds that authors have a social obligation to respond to comments on their posts?
That aside, I have questions about this rate limit:
Does it apply to all posts of any kind, written by anyone? More specifically:
Does it apply to both personal and frontpage posts?
Does it apply to posts written by moderators? Posts written about me (or specifically addressing me)? Posts written by moderators about me?
Does it apply to this post? (I assume that it must not, since you mention that you’d like me to make a case that so-and-so, you say “I am interested in what Said actually prefers here”, etc., but just want to confirm this.) EDIT: See below.
Does it apply to “open thread” type posts (where the post itself is just a “container”, so to speak, and entirely different conversations may be happening under different top-level comments)?
Does it apply to my own posts? (That would be very strange, of course, but it wouldn’t be the strangest edge case that’s ever been left unhandled in a feature implementation, so seems worth checking…)
Does it apply retroactively to existing posts (including very old posts), or only new posts going forward?
Is there any way for a post author to disable this rate limit, or opt out of it?
Does the rate limit reset at a specific time each week, or is there simply a check for whether 3 comments have been posted in the period starting one week before the current time? (See the sketch after this list for the two semantics I have in mind.)
Is there any rate limit on editing comments, or only posting new ones? (It is presumably not the intent to have the rate limit triggered by fixing a typo, for instance…)
Is there a way for me to see the status of the rate limit prior to posting, or do I only find out whether the limit’s active when I try to post a comment and get an error?
Is there any UI cue to inform readers or other commenters (including a post’s author) that I can’t reply to a comment of theirs, e.g., due to the rate limit?
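(For concreteness, here is a minimal sketch of the two reset semantics I mean—purely illustrative, with made-up names, and not a claim about how the actual feature is implemented:)

```python
from datetime import datetime, timedelta

LIMIT = 3
WINDOW = timedelta(weeks=1)

def blocked_sliding(comment_times: list[datetime], now: datetime) -> bool:
    """Sliding window: count comments made in the week ending at `now`."""
    return sum(now - t < WINDOW for t in comment_times) >= LIMIT

def blocked_fixed_reset(comment_times: list[datetime], now: datetime) -> bool:
    """Fixed reset: count comments made since a weekly reset point
    (here, arbitrarily, midnight at the start of the current week)."""
    reset = (now - timedelta(days=now.weekday())).replace(
        hour=0, minute=0, second=0, microsecond=0)
    return sum(t >= reset for t in comment_times) >= LIMIT
```

Under the first semantics, a fourth comment becomes possible exactly one week after the earliest of the three; under the second, everything unlocks at the reset point, regardless of when the three comments were made.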
ETA: After attempting to post this comment last night, I received a message informing me that I would not be able to do so until some hours in the then-future. This answers the crossed-out question above, I suppose. Unfortunately, it also makes the asides about wanting to know what I think on this topic… well, somewhat farcical, quite frankly.
Aww christ, I am very sorry about this. I had planned to ship the “posts can be manually overridden to ignore rate limiting” feature first thing this morning and apply it to this post, but I forgot that you’d still have made some comments less than a week ago, which would block you for a while. I agree that was a really terrible experience and I should have noticed it.
The feature is getting deployed now and will probably be live within a half hour.
For now, I’m manually applying the “ignore rate limit” flag to posts that seem relevant. (I’ll likely do a migration backfill on all posts by admins that are tagged “Site Meta”. I haven’t made a call yet about Open Threads)
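(To be concrete about what the flag does—this is a schematic sketch with hypothetical names, written under my own assumptions about the logic, not the actual PR:)

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

LIMIT = 3
WINDOW = timedelta(weeks=1)

@dataclass
class Post:
    ignore_rate_limits: bool = False  # hypothetical per-post override flag

def comment_blocked(post: Post, recent_comment_times: list[datetime],
                    now: datetime) -> bool:
    """True if the weekly comment limit should block a new comment on `post`."""
    if post.ignore_rate_limits:
        return False  # a moderator has exempted this post (e.g. moderation threads)
    return sum(now - t < WINDOW for t in recent_comment_times) >= LIMIT
```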
I think some of your questions are answered in the previous comment:
I’ll write a more thorough response after we’ve finished deploying the “ignoreRateLimits flag for posts” PR.
Site Meta posts contain a lot more than moderation, so I’m not sure we should do that.
Basically yes, although I note that I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it’s important that I think you are factually incorrect that there are “normatively correct general principles” under which people who don’t engage with your comments “should be interpreted as ignorant”.
(While I recall you explicitly disclaiming such an obligation in some other recent comments… if you don’t think there is some kind of social norm about this, why did you previously use phrasing like “there is always such an obligation” and “Then they shouldn’t post on a discussion forum, should they? What is the point of posting here, if you’re not going to engage with commenters?”. Even if you think most of your comments don’t have the described effect, I think the linked comment straightforwardly implies a social norm. And I think the attitude in that comment shines through in many of your other comments)
I think my actual crux is “somehow, at the end of the day, people feel comfortable ignoring and/or downvoting your comments if they don’t think they’ll be productive to engage with.”
I believe “Said’s commenting style actively pushes against this in a norm-enforcing-feeling way”, but, as noted in the post, I’m still kind of confused about that (and I’ll say explicitly here: I am still not sure I’ve named the exact problem). I said a whole lot of words about various problems and caveats and how they fit together, and I don’t think you can simplify it down to “the problem is X”. I said at the end that a major crux is “Said can adhere to the spirit of ‘don’t imply people have an obligation to engage with your comments’”, where “spirit” is doing some important work of indicating that the problem is fuzzy.
We’ve given you a ton of feedback about this over 5-6 years. I’m happy to talk or answer questions for a couple more days if the questions look like they’re aimed at ‘actually figure out how to comply with the spirit of the request’, but not more discussion of ‘is there a problem here from the moderator’s perspective?’.
I understand (and respect) that you think the moderators are wrong in several deep ways here, and I do honestly think it’s good/better for you to stick around as a generator of thoughts and criticism that’s somewhat uncorrelated with the site admins’ judgment (but not free rein to rehash it in subtle conflict in other people’s comment sections).
I’m open (in the long term) to arguments about whether our entire moderation policy is flawed, but that’s outside the scope of this moderation decision, and you should argue about that in top-level posts and/or in posts by Zack etc. if it’s important to you. [Random note that is probably implied but I want to make explicit: “enforcing standards that the LW community hasn’t collectively opted into, in other people’s threads” is also essentially the criticism I’d make of many past comments of Duncan’s, although he goes about it in a pretty different way.]
Well, no doubt most or all of what you wrote was important, but by “important” do you specifically mean “forms part of the description of what you take to be ‘the problem’, which this moderation action is attempting to solve”?
For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!
Or, take the links. One of them is clearly meant to be an example of the thing you described (and which I quoted). The others… don’t seem to be.[2] Are they just examples of things where you disagree with me? Again, fine and well, but is “being (allegedly) wrong about some non-obvious philosophical point” a moderation-worthy offense…? How do these other links fit into a description of what problem you’re solving?
And, perhaps just as importantly… how does any of this fit into… well, anything that has happened recently? All of your links are to discussions that took place three years ago. What is the connection of any of that to recent events? Are you suggesting that I have recently written comments that would give people the impression that Less Wrong has a social norm that imputes on post authors an obligation to respond to comments on their posts?
I ask these things not because I want to persuade you that there isn’t a problem, per se (I think there are many problems but of course my opinion differs from yours about what they are)—but, rather, because I can hardly comply with the rules, either in letter or in spirit or in any other way, when I don’t know what the rules are. From my perspective, what I seem to see the mods doing is the equivalent of the police stopping a person who’s walking down the street, saying “we’re taking you in for speeding”, and, in response to the confused citizen’s protests, explaining that he got a speeding ticket three years ago, and now they’re arresting him for exceeding the speed limit. Is this a long-delayed punishment? Is there a more recent offense? Is there some other reason for the arrest? Or what?
I think that people should feel comfortable ignoring and/or downvoting anyone’s comments if they don’t think engagement will be productive! Certainly I should not be any sort of exception to this. (Why in the world would I be? Of course you should engage only if you have some expectation that engaging will be productive, and not otherwise!)
If I write a comment and you think it is a bad comment (useless, obviously wrong, etc.), by all means downvote and ignore. Why not? And if I write another comment that says “you have an obligation to reply!”—I wouldn’t say that, because I don’t think that, but let’s say that I did—downvote and ignore that comment, too! Do this no matter who the commenter is!
Anyhow, if the problem really is essentially as I’ve summarized it, plus or minus some nuances and elaborations, then:
I really don’t see what any recent events have to do with anything, or how the rate limit solves it, or… really, this entire situation perplexes me, from that perspective. But,
If the worry is that other Less Wrong participants might get the wrong idea about site norms from my comments, then let me assure you that my comments certainly shouldn’t be taken to imply that said norms are anything other than what the moderators say they are. If anyone gets any other impression from my comments, that can only be a misunderstanding. I solemnly promise that if anyone questions me on this point (i.e., asks whether I am claiming the existence of some norms which the moderators have disclaimed), I will, in response, clearly reaffirm this view. (I encourage anyone, moderators or otherwise, to link to this comment in answer to any commenters or authors who seem at all confused on this point.)
Is that… I mean, does that solve the problem…?
Actually, you somewhat misconstrue the comment, by taking it out of context. That’s perhaps not too important, but worth noting. In any case, it’s a comment I wrote three years ago, in the middle of a long discussion, and as part of a longer and offhandedly-written description, spread over a number of comments, of my view—and which, moreover, takes its phrasing directly from the comment it was a reply to. These are hardly ideal conditions for expressing nuances of meaning. My view is that, when writing comments like this in the middle of a long discussion, it is neither necessary nor desirable to agonize over whether the phrasing and formulation is ideal, because anyone who disagrees or misunderstands can just reply to indicate that, and the confusion or disagreement can be hammered out in the replies. (And this is largely what happened in the given case.[3])
In particular, I can’t help but note that you link to a sub-thread which begins with me saying “This comment is a tangent, and I haven’t decided yet if it’s relevant to my main points or just incidental—”, i.e., where I pretty clearly signal that engagement isn’t necessarily critical, as far as the main discussion goes.
Perhaps you missed it, but I did write a comment in that discussion where I very explicitly wrote that “I’m not saying that there’s a specific obligation for a post author to post a reply comment, using the Less Wrong forum software, directly to any given comment along the lines I describe”. Was that comment, despite my efforts, somehow unclear? That’s possible! These things happen. But is that a moderation-worthy offense…?
The philosophical disagreement is related-to but not itself the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts. Enforcing norms on behalf of a space that you don’t have buy-in for and that the space would reject tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn’t helping and isn’t being asked of them.
If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it’s often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren’t aware what you did was a crime.
Is that really the claim? I must object to it, if that’s so. I don’t think I’ve ever made any false claims about what social norms obtain on Less Wrong (and to the extent that some of my comments were interpreted that way, I was quick to clearly correct that misinterpretation).
Certainly the “normatively correct general principles” comment didn’t contain any such false claims. (And Raemon does not seem to be claiming otherwise.) So, the question remains: what exactly is the relevance of the philosophical disagreement? How is it connected to any purported violations of site rules or norms or anything?
I am not sure what this means. I am not a moderator, so it’s not clear to me how I can enforce any norm. (I can exemplify conformance to a norm, of course, but that, in this case, would be me replying to comments on my posts, which is not what we’re talking about here. And I can encourage or even demand conformance to some falsely-claimed norm. But for me to enforce anything seems impossible as a purely technical matter.)
Indeed, if I had done this, then some censure would be warranted. (Now, personally, I would expect that such censure would start with a comment from a moderator, saying something like: “<name of my interlocutor>, to be clear, Said is wrong about what the site’s rules and norms are; there is no obligation to respond to commenters. Said, please refrain from misleading other users about this.” Then subsequent occurrences of comments which were similarly misleading might receive some more substantive punishment, etc. That’s just my own, though I think a fairly reasonable, view of how this sort of moderation challenge should be approached.)
But I think that, taking the totality of my comments in the linked thread, it is difficult to support the claim that I somehow made false claims about site rules or norms. It seems to me that I was fairly clearly talking about general principles—about epistemology, not community organization.
Now, perhaps you think that I did not, in fact, make my meaning clear enough? Well, as I’ve said, these things do happen. Certainly it seems to me like step one to rectify the problem, such as it is, would be just to make a clear ex cathedra statement about what the rules and norms actually are. That mitigates any supposed damage. (Was this done? I don’t recall that it was. But perhaps I missed it.) Then there can be talk of punishment.[1]
But, of course, there already was a moderation warning issued for the incident in question. Which brings us back to the question of what it has to do with the current situation (and to my “arrest for a speeding ticket issued three years ago” analogy).
P.S.:
To be maximally clear: I neither believed nor (as far as I can recall) claimed this.
Although it seems to me that to speak in terms of “punishment”, when the offense (even taking as given that the offense took place at all) is something so essentially innocent as accidentally mis-characterizing an informal community norm, is, quite frankly, bizarrely harsh. I don’t think that I’ve ever participated in any other forum with such a stringent approach to moderation.
For a quick answer connecting the dots on “what does the recent Duncan/Said conflict have to do with Said’s past behavior?”: I think your behavior in the various you/Duncan threads was bad in basically the same way as the behavior we gave you a mod warning about 5 years ago, and also similar to a preliminary warning we gave you 6 years ago (via Intercom, which ended in us deciding to take no action at the time)
(i.e. some flavor of aggressiveness/insultingness, along with demanding more work from others than you were bringing yourself).
As I said, I cut you some slack for it because of some patterns Duncan brought to the table, but not that much slack.
The previous mod warning said “we’d ban you for a month if you did it again”. I don’t really feel great about that, since over the past 5 years there have been various comments that flirted with the same behavior, and the cost of evaluating it each time is pretty high.
I will think on whether this changes anything for me. I do think it’s helpful; offhand, I don’t feel that it completely (or even more-than-50%) solves the problem, but I do appreciate it and will think on it.
I wonder if you find this comment by Benquo (i.e., the author of the post in question; note that this comment was written just months after that post) relevant, in any way, to your views on the matter?
Yeah, I do find that comment/concept important. I think I was basically already counting that class of thing in the list of positive things I’d mentioned elsethread, but yes, I am grateful to you for that. (Benquo being one to say it in that context is a bit more evidence of its weight which I had missed before, but I do think I was already weighting the concept approximately the right amount for the right reasons—partly from having already generally updated on some parts of the Benquo worldview.)
Please note, my point in linking that comment wasn’t to suggest that the things Benquo wrote are necessarily true and that the purported truth of those assertions, in itself, bears on the current situation. (Certainly I do agree with what he wrote—but then, I would, wouldn’t I?)
Rather, I was making a meta-level point. Namely: your thesis is that there is some behavior on my part which is bad, and that what makes it bad is that it makes post authors feel… bad in some way (“attacked”? “annoyed”? “discouraged”? I couldn’t say what the right adjective is, here), and that as a consequence, they stop posting on Less Wrong. And as the primary example of this purported bad behavior, you linked the discussion in the comments of the “Zetetic Explanation” post by Benquo (which resulted in the mod warning you noted).
But the comment which I linked has Benquo writing, mere months afterward, that the sort of critique/objection/commentary which I write (including the sort which I wrote in response to his aforesaid post) is “helpful and important”, “very important to the success of an epistemic community”, etc. (Which, I must note, is tremendously to Benquo’s credit. I have the greatest respect for anyone who can view, and treat, their sometime critics in such a fair-minded way.)
This seems like very much the opposite of leaving Less Wrong as a result of my commenting style.
It seems to me that when the prime example you provide of my participation in discussions on Less Wrong purportedly being the sort of thing that drives authors away, actually turns out to be an example of exactly the opposite—of an author (whose post I criticized, in somewhat harsh terms) fairly soon (months) thereafter saying that my critical comments are good and important to the community and that I should continue…
… well, then regardless of whether you agree with the author in question about whether or not my comments are good/important/whatever, the fact that he holds this view casts very serious doubt on your thesis. Wouldn’t you agree?
(And this, note, is an author who has written many posts, many of them quite highly upvoted, and whose writings I have often seen cited in all sorts of significant discussions, i.e., one who has contributed substantially to Less Wrong.)
The reason it’s not additional evidence to me is that I, too, find value in the comments you write for the reasons Benquo states, despite also finding them annoying at the time. So, Benquo’s response here seems like an additional instance of my viewpoint here, rather than a counterexample. (Though I’m not claiming Benquo agrees with me on everything in this domain.)
Said is asking Ray, not me, but I strongly disagree.
Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)
Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)
Point 3 is that benquo’s view on even that specific comment is not the only author-view that matters; benquo eventually being like “this critical feedback was great” does not mean that other authors watching the interaction at the time did not feel “ugh, I sure don’t want to write a post and have to deal with comments like this one.” (Said knows this, I think.)
(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we’re updating on benquo’s endorsements then it comes out to “both sets of norms useful,” presumably for different things.)
I’d say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like “yeah, fair, this did not turn out to be the best example,” not “oh snap, you’re right, turns out it was all a house of cards.”
(This will be my only comment in this chain, so as to avoid repeating past cycles.)
A black raven is, indeed, not strong evidence against white ravens. But that’s not quite the right analogy. The more accurate analogy would go somewhat like this:
Alice: White ravens exist!
Bob: Yeah? For real? Where, can I see?
Alice (looking around and then pointing): Right… there! That one!
Bob (peering at the bird in question): But… that raven is actually black? Like, it’s definitely black and not white at all.
Now not only is Bob (once again, as he was at the start) in the position of having exactly zero examples of white ravens (Alice’s one purported example having been revealed to be not an example at all), but—and perhaps even more importantly!—Bob has reason to doubt not only Alice’s possession of any examples of her claim (of white ravens existing), but her very ability to correctly perceive what color any given raven is.
Now if Alice says “Well, I’ve seen a lot of white ravens, though”, Bob might quite reasonably reply: “Have you, though? Really? Because you just said that that raven was white, and it is definitely, totally black.” What’s more, not only Bob but also Alice herself ought rightly to significantly downgrade her confidence in her belief in white ravens (by a degree commensurate with how big a role her own supposed observations of white ravens have played in forming that belief).
Just so. But, once again, we must make our analysis more specific and more precise in order for it to be useful. There are two points to make in response to this.
First is what I said above: the point is not just that the commenting style/approach in question is valuable to some authors (although even that, by itself, is surely important!), but that it turns out to be valuable specifically to the author who served as an—indeed, as the—example of said commenting style/approach being bad. This calls into question not just the thesis that said approach is bad in general, but also the weight of any purported evidence of the approach’s badness, which comes from the same source as the now-controverted claim that it was bad for that specific author.
Second is that not all authors are equal.
Suppose, for example, that dozens of well-respected and highly valued authors all turned out to condemn my commenting style and my contributions, while those who showed up to defend me were all cranks, trolls, and troublemakers. It would still be true, then, to say that “my comments are valuable to some authors but displease others”, but of course the views of the “some” would be, in any reasonable weighting, vastly and overwhelmingly outweighed by the views of the “others”.
But that, of course, is clearly not what’s happening. And the fact that Benquo is certainly not some crank or troll or troublemaker, but a justly respected and valued contributor, is therefore quite relevant.
First, for clarity, let me note that we are not talking (and Benquo was not talking) about a single specific comment, but many comments—indeed, an entire approach to commenting and forum participation. But that is a detail.
It’s true that Benquo’s own views on the matter aren’t the only relevant ones. But they surely are the most relevant. (Indeed, it’s hard to see how one could claim otherwise.)
And as far as “audience reactions” (so to speak) go, it seems to me that what’s good for the goose is good for the gander. Indeed, some authors (or potential authors) reading the interaction might have had the reaction you describe. But others could have had the opposite reaction. (And, judging by the comments in that discussion thread—as well as many other comments over the years—others in fact did have the opposite reaction, when reading that discussion and numerous others in which I’ve taken part.) What’s more, it is even possible (and, I think, not at all implausible) that some authors read Benquo’s months-later comment and thought “you know, he’s right”.
Well, as I said in the grandparent comment, updating on Benquo’s endorsement is exactly what I was not suggesting that we do. (Not that I am suggesting the opposite—not updating on his endorsement—either. I am only saying that this was not my intended meaning.)
Still, I don’t think that what you say about “both sets of norms useful” is implausible. (I do not, after all, take exception to all of your preferred norms—quite the contrary! Most of them are good. And an argument can be made that even the ones to which I object have their place. Such an argument would have to actually be made, and convincingly, for me to believe it—but that it could be made, seems to me not to be entirely out of the question.)
Well, as I’ve written, to the extent that the convincingness of an argument for some claim rests on examples (especially if it’s just one example), the purported example(s) turning out to be no such thing does, indeed, undermine the whole argument. (Especially—as I note above—insofar as that outcome also casts doubt on whatever process resulted in us believing that raven to have been white in the first place.)
Answering some other questions:
By default, the rate limit applies to all posts unless we’ve made an exception. There are currently two exceptions:
1. I just shipped the “ignore rate limits” flag on posts, which authors or admins can set so that a given post allows rate-limited users to comment without restriction.
2. I haven’t shipped this yet, but expect within the next day to ship “rate-limited authors can comment on their own posts without restriction.” (For the immediate future this just applies to authors; I expect to later ship something that makes it work for coauthors too.)
In general, we are starting by rolling out the simplest versions of the rate-limiting feature (which is being used on many users, not just you), and solving problems as we notice them. I acknowledge this makes for some bad experiences along the way. I think I stand by that decision because I’m not even sure rate limits will turn out to work as a moderator tool, and investing like 3 months of upfront work ironing out the bugs first doesn’t seem like the right call.
For the general question of “whether a given such-and-such post will be rate limited”, the answer will route through “will individual authors choose to set ignoreRateLimits, and/or will site admins choose to do it?”
Ruby and I have some disagreements on how important it is to set the flag on moderation posts. I personally think it makes sense to be extra cautious about limiting people’s ability to speak in discussions that will impact their future ability to speak, since those can snowball and I think people are rightly wary of that. There are some other tradeoffs important to @Ruby, which I guess he can elaborate on if he wants.
For now, I’m toggling on the ignoreRateLimits flag on most of my own moderation posts. (I’ve currently done so for LW Team is adjusting moderation policy and “Rate limiting” as a mod tool.)
Other random questions:
Re: Open threads – I haven’t made a call yet, but I’m leaving the flag disabled/rate-limited-normally for now.
There is no limit on rate-limited people editing their own comments. We might revisit this if it becomes a problem, but my current guess is that rate-limitees editing their comments is pretty fine.
The check happens based on the timestamp of your last comment (it works by fetching your comments within the time window and seeing whether there are more than the allotted number).
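(For concreteness, here is a rough sketch of how a check like that could fit together with the two exceptions above. This is illustrative TypeScript, not the actual site code; the types and `fetchComments` are made up, though `ignoreRateLimits` is the flag name used above.)

```typescript
// Illustrative sketch only -- not the actual LessWrong implementation.
// `ignoreRateLimits` is the per-post flag discussed above; everything
// else (the type shapes, fetchComments) is invented for this example.

interface Post {
  authorId: string;
  ignoreRateLimits: boolean;
}

interface Comment {
  postedAt: Date;
}

const WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // one week
const ALLOWED_PER_WINDOW = 3;              // e.g. 3 comments per post per week

async function mayComment(
  userId: string,
  post: Post,
  // fetches the user's comments on this post since a given time
  fetchComments: (userId: string, post: Post, since: Date) => Promise<Comment[]>,
): Promise<boolean> {
  if (post.ignoreRateLimits) return true;    // per-post exception (1. above)
  if (post.authorId === userId) return true; // own-post exception (2. above)

  // The check described above: fetch comments within the time window
  // and see whether the allotment is already used up.
  const since = new Date(Date.now() - WINDOW_MS);
  const recent = await fetchComments(userId, post, since);
  return recent.length < ALLOWED_PER_WINDOW;
}
```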
On LessWrong.com (but presumably not greaterwrong, atm) it should inform you that you’re not able to comment before you get started.
On LessWrong.com, it will probably (later, but not yet; not sure whether we’ll get to it this week) show an indicator that a commenter has been rate limited. (It’s fairly easy to do this when you open a comment-box to reply to them; there are some performance concerns with checking-to-display it on every comment by default.)
I plan to add a list of rate-limited users to lesswrong.com/moderation. I think there’s a decent chance that goes live within a day or so.
A lot of this is that the set of “all moderation posts” covers a wide range of topics, and the potential set of “all rate-limited users” might include a wide diversity of users, making me reluctant to commit upfront to rate limits not applying blanketly across the board on moderation posts.
The concern about excluding people from conversations that affect whether they get to speak is a valid consideration, but I think there are others too. Chiefly, people are likely rate limited primarily because they get in the way of productive conversation, and insofar as I care about moderation conversations going well, I might want to continue to exclude rate-limited users there.
Note that there are ways, albeit with friction, for people to weigh in on moderation questions freely. If it seemed necessary, I’d be down with creating special un-rate-limited side-posts for moderation posts.
I am realizing that what seems reasonable here will depend on your conception of rate limits. A couple of conceptions you might have:
1. You’re currently not producing stuff that meets the bar for LessWrong, but you’re writing a lot, so we’ll rate limit you as a warning with teeth to up your quality.
2. We would have banned / are close to banning you; however, we think rate limits might serve either as:
   - a sufficient disincentive against the actions we dislike, or
   - a restriction that simply stops you getting into unproductive things, e.g. Demon Threads.
Regarding 2., a banned user wouldn’t get to participate in moderation discussions either, so under that frame, it’s not clear rate-limited users should get to. I guess it really depends on whether it was more of a warning / light rate limit, or something more severe, closer to an actual ban.
I can say more here; it’s not exactly a complete thought. Will do so if people are interested.
I just shipped the “ignore rate limit” flag for posts, and removed the rate limit for this post. All users can set the flag on individual posts.
Currently they have to set it for each individual post. I think it’s moderately likely we’ll make it so users can set it as a default setting, although I haven’t talked it through with other team members yet so can’t make an entirely confident statement on it. We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent).
I’m working on a longer response to the other questions.
I could be misunderstanding all sorts of things about this feature that you’ve just implemented, but…
Why would you want to prevent newer users from being able to declare that rate-limited users can post as much as they like on those newer users’ posts? Shouldn’t I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?
100+ karma means something like you’ve been vetted for some degree of investment in the site and enculturation, reducing the likelihood you’ll do something with poor judgment and ill intention. I might worry about new users creating posts that ignore rate limits, then attracting all the rate-limited new users who were not having good effects on the site to come comment there (haven’t thought about it hard, but it’s the kind of thing we consider).
The important thing is that the way the site currently works, any behavior on the site is likely to affect other parts of the site, such that to ensure the site is a well-kept garden, the site admins do have to consider which users should get which privileges.
(There are similar restrictions on which users can ban which users from their posts.)
I expect Ray will respond more. My guess is you not being able to comment on this specific post is unintentional and it does indeed seem good to have a place where you can write more of a response to the moderation stuff.
The other details will likely be figured out as the feature gets used. My guess is that how things behave is kind of random until we spend more time figuring out the details. My sense was that the feature was kind of thrown together and is now being iterated on more.
The discussion under this post is an excellent example of the way that a 3-per-week per-post comment limit makes any kind of useful discussion effectively impossible.
I continue to be disgusted with this arbitrary moderator harassment of a long-time, well-regarded user, apparently on the pretext that some people don’t like his writing style.
Achmiz is not a spammer or a troll, and has made many highly-upvoted contributions. If someone doesn’t like Achmiz’s comments, they’re free to downvote (just as I am free to upvote). If someone doesn’t want to receive comments from Achmiz, they’re free to use already-existing site functionality to block him from commenting on their own posts. If someone doesn’t like his three-year-old views about an author’s responsibility or lack thereof to reply to criticisms, they’re free to downvote or offer counterarguments. Why isn’t that the end of the matter?
Elsewhere, Raymond Arnold complains that Achmiz isn’t “corrigible about actually integrating the spirit-of-our-models into his commenting style”. Arnold also proposes that awareness of frame control—a concept that Achmiz has criticized—become something one is “obligated to learn, as a good LW citizen”. I find this attitude shockingly anti-intellectual. Since when is it the job of a website administrator to micromanage how intellectuals think and write, and what concepts they need to accept? (As contrasted with removing low-quality, spam, or off-topic comments; breaking up flame wars, &c.)
My first comment on Overcoming Bias was on 15 December 2007. I was at the first Overcoming Bias meetup on 21 February 2008. Back then, there was no concept of being a “good citizen” of Overcoming Bias. It was a blog. People read the blog, and left comments when they had something to say, speaking in their own voice, accountable to no authority but their own perception of reality, with no obligation to be corrigible to the spirit of someone else’s models. Achmiz’s first comment on Less Wrong was in May 2010.
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
Perhaps it will be replied that no one is being silenced—this is just a mere rate-limit, not any kind of persecution or restriction on speech. I don’t think Oliver Habryka is naïve enough to believe that. Citizenship—first-class citizenship—is a Schelling point. When someone tries to take that away from you, it would be foolish to believe that they don’t intend you any further harm.
I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.
Sure, but… I think I don’t know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
LessWrong totally has prerequisites. I don’t think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven’t really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
Well-Kept Gardens Die by Pacifism is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it’s doing something pretty similar to the outraged calls for “censorship” that Eliezer refers to in that post, but I might just be misunderstanding you. In-general, LessWrong has always and will continue to be driven by inside-view models of the moderators about what makes a good discussion forum, and this seems quite important.
I don’t know, I guess your whole comment feels really quite centrally like the kind of thing that Eliezer explicitly warns against in Well-Kept Gardens Die by Pacifism, so let me just reply to quotes from you with quotes from Eliezer:
Eliezer: [quoted passage from “Well-Kept Gardens Die by Pacifism”]

You: [quoted passage from the parent comment]

Eliezer: [quoted passage from “Well-Kept Gardens Die by Pacifism”]
Again, this is all just on a very rough reading of your comment, and I might be misunderstanding you.
My current read here is that your objection is really a very standard “how dare the moderators moderate LessWrong” objection, when like, I do really think we have the mandate to moderate LessWrong how we see fit, and indeed maybe the primary reason why LessWrong is not as dead as basically every other forum of its age and popularity is because it had the seed of “Well-Kept Gardens Die by Pacifism” in it. The understanding that yes, of course the moderators will follow their inside view and make guesses at what is best for the site without trying to be maximally justifiable, and without getting caught in spirals of self-doubt of whether they have the mandate to do X or Y or Z.
But again, I don’t think I super understood what specific question you were asking me, so I might have totally talked past you.
I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.
Thanks, to clarify: I don’t intend to make a “how dare the moderators moderate Less Wrong” objection. Rather, the objection is, “How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma.” (That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.) I’m saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don’t want to accept literally any speech (which is why the grandparent mentions “removing low-quality [...] comments” as a legitimate moderator duty).
Note that “permanently restrict the account of” is different from “moderate”. For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I’m accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz.
Regarding Yudkowsky’s essay “Well-Kept Gardens Die By Pacifism”, please note that the end of the essay points out that a forum with a karma system is different from a forum (such as a mailing list) in which moderators are the only attention-allocation mechanism, and urges users not to excessively question themselves when considering downvoting. I agree with this! That’s why the grandparent emphasizes that users who don’t like Achmiz’s comments are free to downvote them. The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I fear that Yudkowsky might have been right when he claimed that “[a]ny community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.” I sincerely hope Less Wrong is worth saving.
Hmm, I am still not fully sure about the question (your original comment said “I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here”, which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said’s net-contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like “purchase him out of his right to use LessWrong” or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts, to make it more clear that I am hoping for some kind of trade here, and don’t want this to feel like some kind of social slapdown.
Now, commenting on the individual pieces:
Well, I mean, the disagreement surely is about whether Said, in his capacity as a commenter, is “well-regarded”. My sense is Said is quite polarizing and saying that he is a “long-time ill-regarded” user would be just as accurate. Similarly saying “many highly-downvoted contributions” is also accurate. (I think seniority matters a bit, though like not beyond a few years, and at least I don’t currently attach any special significance to someone having been around for 5 years vs. 10 years, though I can imagine this being a mistake).
This is not to say I would consider a summary that describes Said as a “long-time ill-regarded menace with many highly downvoted contributions” as accurate. But neither do I think your summary here is accurate. My sense is a long-time user with some highly upvoted comments and some highly downvoted comments can easily be net-negative for the site.
Neither do I feel that net-karma is currently at all a good guide to the quality of site contributions. First, karma is just very noisy, and sometimes random posts and comments get hundreds of karma when someone on Twitter links to them and the tweet goes viral. But second, and more importantly, there is a huge bias in karma towards the positive. You frequently find comments with +70 karma and very rarely see comments with −70 karma. Some of that is a natural consequence of making comments and posts with higher karma more visible; some of that is that most people experience pushing someone into the negatives as a lot socially harsher than letting them hover somewhere around 0.
This is again not to say that I am actually confident that Said’s commenting contributions have been net-negative for the site. My current best guess is yes, but it’s not super obvious to me. I am however quite confident that there is a specific type of commenting interaction that has been quite negative, has driven away a lot of really valuable contributors, and doesn’t seem to have produced much value, which is the specific type of interaction that Ray is somehow trying to address with the rate-limiting rules.
I think people responded pretty extensively to the comment you mention here, but to give my personal response to this:
Most people (and especially new users) don’t keep track of individual commenters to the degree that would make it feasible to ban the people they would predictably have bad interactions with. The current proposal is basically to allow users to ban or unban Said however they like (since they can both fully ban him, and allow him to comment without rate limit on their posts); we are just suggesting a default that I expect to be best for most users and the default site experience.
Downvoting helps a bit with reducing visibility, but it doesn’t help a lot. I see downvoting in substantial part as a signal from the userbase to the authors and moderators to take some kind of long-term action. When someone’s comments are downvoted, authors still get notifications for them, and they still tend to blow up into large demon threads, so just voting on comments doesn’t help that much with solving the moderation problem (this is less true for posts, but only a small fraction of Said’s contributions are in the form of posts, and I actually really like all of his posts, so this doesn’t really apply here). We can try to make automated systems here, but I can’t currently think of any super clear-cut rules we could put into code, since, as I said above, net-karma really is not a reliable guide. I do think it’s worth thinking more about (using the average of the most recent N comments helps a bit, but is really far from catching all the cases I am concerned about).
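(To illustrate the recent-N-average idea just mentioned: here is a minimal sketch of what such an automated check might look like. The window size and threshold are invented for illustration; this is not a rule the site runs.)

```typescript
// Sketch of the "average of the most recent N comments" heuristic.
// N and THRESHOLD are invented for illustration only.

const N = 20;         // how many recent comments to average over
const THRESHOLD = 0;  // flag users whose recent average karma falls below this

// `recentKarma` holds karma scores of a user's comments, most recent first.
function flaggedByRecentAverage(recentKarma: number[]): boolean {
  const window = recentKarma.slice(0, N);
  if (window.length === 0) return false;
  const avg = window.reduce((sum, k) => sum + k, 0) / window.length;
  return avg < THRESHOLD;
}
```

(As the parenthetical above says, a rule like this catches some cases but misses many of the ones at issue, since net-karma itself is not a reliable guide.)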
Separately, I want to also make a bigger picture point about moderation on LessWrong:
LessWrong moderation definitely works on a case-law basis
There is no way I can meaningfully write down all the rules and guidelines about how people should behave in discourse in advance. The way we’ve always made moderation decisions was to iterate locally on what things seem to be going wrong, and then try to formulate new rules, give individuals advice, and figure out general principles as they become necessary.
This case is the same. Yep, we’ve decided to take moderation action for this kind of behavior, more than we have done in the past. Said is the first prosecuted case, but I would absolutely want to hold all other users to the same standard going into the future (and indeed my sense is that Duncan is receiving a warning for some things that fall under that same standard). I think it’s good and proper for you to hold us to being consistent and ask us to moderate other people doing similar things in the future the same way as we’ve moderated Said here.
I hope this is all helpful. I still have a feeling you wanted some straightforward non-bullshit answer to a specific question, but I still don’t know which one, though I hope that what I’ve written above clarifies things at least a bit.
I don’t know if it’s good that there’s a positive bias in karma, but I’m pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall, even if it is the best way to handle Said-type cases in particular.
I think I mostly meant “answer” in the sense of “reply” (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.
I have a lot of extremely strong disagreements with this, but they can wait three months.
Cool, makes sense. Also happy to chat in-person sometime if you want.
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
how is this even a reasonable-
Isn’t this community close in idea terms to Effective Altruism? Wouldn’t it be better to say “Said, if you change your commenting habits in the manner we prescribe, we’ll donate $10k-$100k to a charity of your choice?”
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I’ve been a member on other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and the thought of any sort of payout, even $1, would not have even been thought of.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high, and just the time for this specific moderation decision has probably cost around 2.5 total staff weeks for engineers who could make around $270k on average in industry, so that already suggests something in the $10k range of costs.
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don’t really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don’t really get what this would improve. Also, not everyone cares about donating to charity, and that’s fine.
The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between “good user” and “ban”.
I guess I’m having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I’m familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don’t know).
I do want to note that my problem isn’t with offering Said money—any offer to any user of any Internet community feels… extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that’s contracting and not unusual. I’m not even necessarily offended by such an offer, just, again, extremely surprised.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
I currently model the extended rationality and AI Alignment community as a professional community which for many people constitutes their primary work context, is responsible for their salary, and is responsible for a lot of daily infrastructure they use. I think viewing it through that lens, it makes sense that limiting someone’s access to some piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable.
I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k each, paying every one of them would only come to around $1M, which is less than the annual budget of Lightcone, and so doesn’t seem totally crazy.
This, plus Vaniver’s comment, has made me update—LW has been doing some pretty confusing things if you look at it like a traditional Internet community that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.
I’ve had a nagging feeling in the past that the rationalist community isn’t careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc, on LW—and also being fairly scrupulous in general). Most of the other examples I’ve seen have been kinda small-scale and so I haven’t really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seems like a good place to apply Chesterton’s fence.
It might help to think of LW as more like a small town’s newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with “business expense” lenses instead of “personal budget” lenses.
Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn’t really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he’s made, paid to settle a legal issue… the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted on my reply to habryka, extremely surprising.
Exactly. It’s hilarious and awesome. (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)
I endorse much of Oliver’s replies, and I’m mostly burnt out from this convo at the moment so can’t do the follow-through here I’d ideally like. But it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said’s reference class is very high, and I’d treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.
I don’t think the Spirit of LessWrong 2009 actually supports you on the specific claims you’re making here.
As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka, who founded the LessWrong team and got Eliezer’s buy-in, and now we have 6 years of track record that I think most people agree is much better than nobody in charge.
But, honestly, I don’t actually think you really believe these meta-level arguments (or, at least, won’t upon reflection and maybe a week of distance). I think you disagree with our object-level call on Said, and with the overall moderation philosophy that led to it. And, like, I do think there’s a lot to legitimately argue over with the object-level call on Said and the overall moderation philosophy surrounding it. I’m fairly burnt out from talking about this in the immediate future, but fwiw I welcome top-level posts arguing about this and expect to engage with them in the future.
And if you decide to quit LessWrong in protest, well, I will be sad about that. I think your writing and generator are quite valuable. I do think there’s an important spirit of early LessWrong that you keep alive, and I’ve made important updates due to your contributions. But, also, man it doesn’t look like your relationship with the site is necessarily that healthy for you.
...
I think a lot of what you’re upset about is an overall sense that your home doesn’t feel like you’re home anymore. I do think there is a legitimately sad thing worth grieving there.
But I think old LessWrong did, actually, die. And, if it hadn’t, well, it’s been 12 years and the world has changed. I think it wouldn’t make sense, by the Spirit of 2009 LessWrong’s lights, to stay exactly the way you remember it. I think some of this is due to specific philosophies the LessWrong 2.0 team brings (I think our original stated goal of “cause intellectual progress to happen faster/better” is very related to and driven by the original sequences, but I think our frame is subtly different). But meanwhile a lot of it is just about the world changing, and Eliezer moving on in some ways (early LessWrong’s spirit was AFAICT largely driven by Eliezer posting frequently, while braindumping a specific set of ideas he had to share. That process is now over and any subsequent process was going to be different somehow)
I don’t know that I really have a useful takeaway. Sometimes there isn’t one. But insofar as you think it is healthy for you to stay on LessWrong and you don’t want to quit in protest of the mod call on Said, fwiw I continue to welcome posts arguing for what you think the spirit of lesswrong should be, and/or where you think the mod team is fucking up.
(As previously stated, I’m fairly burnt out atm, but would be happy to talk more about this sometime in the future if it seemed helpful)
Not to respond to everything you’ve said, but I question the argument (as I understand it) that because someone is {long-time, well-regarded, has many highly-upvoted contributions, lots of karma}, this means they are necessarily someone who, at the end of the day, you want around / who is net positive for the site.
Good contributions are relevant. But so are costs. Arguing against the costs seems valid; saying benefits outweigh costs seems valid; but assuming this is what you’re saying, I don’t think just saying someone has benefits means that obviously you want them as an unrestricted citizen.
(I think in fact how it’s actually gone is that all of those positive factors you list have gone into moderators decisions so far in not outright banning Said over the years, and why Ray preferred to rate limit Said rather than ban him. If Said was all negatives, no positives, he’d have been banned long ago.)
Correct me though if there’s a deeper argument here that I’m not seeing.
In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.
It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative styles, are just slightly to the side of the mods’ ideals.
(This was definitely my opinion of the state of moderation over at DSL, for example, until a few months ago. The former problem has, happily, been solved; the latter, unhappily, remains. Less Wrong likewise seems to be well on its way toward solving the former problem; I would not have thought the latter to obtain… but now my opinion, unsurprisingly, has shifted.)
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way? As far as I know, no such thing has been done. (Aella’s post on the subject was manifestly nonsensical, to say the least; if that’s the best explanation we’ve got, then I think that it’s safe to say that the concept is incoherent nonsense, and using it does more harm than good.) But perhaps I’ve missed it?
In the comment Zack cites, Raemon said the same when raising the idea of making it a prerequisite:
Also for everyone’s awareness, I have since written up Tabooing “Frame Control” (which I’d hoped would be part 1 of 2 posts on the topic), but the reception of the post, i.e. 60ish karma, didn’t seem like everyone was like “okay yeah this concept is great”, and I currently think the ball is still in my court for either explaining the idea better, refactoring it into other ideas, or abandoning the project.
Yep! As far as I remember the thread Ray said something akin to “it might be reasonable to treat this as a prerequisite if someone wrote a better explanation of it and there had been a bunch of discussion of this”, but I don’t fully remember.
Aella’s post did seem like it had a bunch of issues and I would feel kind of uncomfortable with having a canonical concept with that as its only reference (I overall liked the post and thought it was good, but I don’t think a concept should reach canonicity just on the basis of that post, given its specific flaws).
Arnold says he is thinking about maybe proposing that in the future, after he has done the work to justify it and paid attention to how people react to it.
(Tangentially) If users are allowed to ban other users from commenting on their posts, how can I tell whether the lack of criticism in the comments of some post means that nobody wanted to criticize it (which is a very useful signal that I would want to update on), or that the author has banned some or all of their most prominent/frequent critics? In addition, I think many users may be misled by a lack of criticism if they’re simply not aware of the second possibility or have forgotten it. (I think I knew it, but it hadn’t entered my conscious awareness for a while, until I read this post today.)
(Assuming there’s not a good answer to the above concerns) I think I would prefer to change this feature/rule to something like allowing the author of a post to “hide” commenters or individual comments, which means that those comments are collapsed by default (and marked as “hidden by the post author”) but can be individually expanded, and each user can set an option to always expand those comments for themselves.
Maybe a middle ground would be to give authors a double-strong downvote power for comments on their posts. A comment with low enough karma is already hidden by default, and repeated strong downvotes without further response would tend to chill rather than inflame the ensuing discussion, or at least push the bulk of it away from the author’s arena, without silencing critics completely.
I think a problem that my proposal tries to solve, and this one doesn’t, is that some authors seem easily triggered by some commenters, and apparently would prefer not to see their comments at all. (Personally if I was running a discussion site I might not try so hard to accommodate such authors, but apparently they include some authors that the LW team really wants to keep or attract.)
To me it seems unlikely that there’d be enough banning to prevent criticism from surfacing. Skimming through https://www.lesswrong.com/moderation, the number of bans seems to be pretty small. And if there is an important critique to be made, I’d expect it to be something that more than the few banned users would think of and decide to post a comment on.
This may be true in some cases, but not all. My experience here comes from cryptography where it often takes hundreds of person-hours to find a flaw in a new idea (which can sometimes be completely fatal), and UDT, where I found a couple of issues in my own initial idea only after several months/years of thinking (hence going to UDT1.1 and UDT2). I think if you ban a few users who might have the highest motivation to scrutinize your idea/post closely, you could easily reduce the probability (at any given time) of anyone finding an important flaw by a lot.
Another reason for my concern is that the bans directly disincentivize other critics, and people who are willing to ban their critics are often unpleasant for critics to interact with in other ways, further disincentivizing critiques. I have this impression for Duncan myself which may explain why I’ve rarely commented on any of his posts. I seem to remember once trying to talk him out of (what seemed to me like) overreacting to a critique and banning the critic on Facebook, and having an unpleasant experience (but didn’t get banned), then deciding to avoid interacting with him in the future. However I can’t find the actual interaction on FB so I’m not 100% sure this happened. FB has terrible search which probably explains it, but maybe I hallucinated this, or confused him with someone else, or did it with a pseudonym.
Hm, interesting points.
My impression is that there are some domains for which this is true, but those are the exception rather than the rule. However, this impression is just based off of, err, vaguely querying my brain? I’m not super confident in it. And your claim is one that I think is “important if true”. So then, it does seem worth an investigation. Maybe enumerating through different domains and asking “Is it true here? Is it true here?”.
One thing I’d like to point out is that, LessWrong being a community, something very similar is already happening. Only a certain type of person comes to LessWrong (this is true of all communities to some extent; they attract a subset of people). It’s not that “outsiders” are explicitly banned; they just don’t join and thus don’t comment. So then, effectively, ideas presented here currently aren’t available to “outsiders” for critiques.
I think there is a trade off at play: the more you make ideas available to “outsiders” the lower the chance something gets overlooked, but it also has the downside of some sort of friction.
(Sorry if this doesn’t make sense. I feel like I didn’t articulate it very well but couldn’t easily think of a better way to say it.)
Good point. I think that’s true and something to factor in.
While the current number of bans is pretty small, I think this is in part because lots of users don’t know about the option to ban people from their posts. (See here, for example.)
That makes sense. Still, even if it were more well known, I wouldn’t expect the number of bans to reach the point where it is causing real problems with respect to criticism surfacing.
One solution is to limit the number of banned users to a small fraction of overall commentors. I’ve written 297 posts so far and have banned only 3 users from commenting on them. (I did not ban Duncan or Said.)
My highest-quality criticism comes from users who I have never even considered banning. Their comments are consistently well-reasoned and factually correct.
What exactly does “nobody wanted to criticize it” signal that you don’t get from high/low karma votes?
Some UI thoughts as I think about this:
Right now, you see total karma for posts and comments, and total vote count, but not the number of upvotes/downvotes. So you can’t actually tell when something is controversial.
One reason for this is that we once briefly tried turning this on, and immediately found it made the site much more stressful and anxiety-inducing. Getting a single downvote felt like “something is WRONG!”, which didn’t feel productive or useful. Another reason is that it can de-anonymize strong-votes, because their voting power is a less common number.
But an idea I just had was that maybe we should expose that sort of information once a post becomes popular enough. Like maybe over 75 karma. [Better idea: once a post has a certain number of votes. Maybe at least 25.] At that point you have more of a sense of the overall karma distribution, so individual votes feel less weighty, and also hopefully it’s harder to infer individual voters. Tagging @jp who might be interested.
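(As a sketch of what that reveal rule would amount to: the condition below is all that’s being proposed. The field names and the threshold are illustrative, not actual site code.)

```typescript
// Sketch of the proposed display rule: only show the upvote/downvote
// breakdown once a post has accumulated enough votes. Field names and
// the threshold are illustrative.

interface PostVoteSummary {
  karma: number;     // net karma (already shown today)
  voteCount: number; // total number of votes cast (already shown today)
}

const MIN_VOTES_TO_REVEAL = 25; // the "at least 25 votes" idea above

function shouldShowVoteBreakdown(post: PostVoteSummary): boolean {
  // With this many votes, a single downvote feels less alarming, and
  // individual (strong-)votes are harder to de-anonymize.
  return post.voteCount >= MIN_VOTES_TO_REVEAL;
}
```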
I support exposing the number of upvotes/downvotes. (I wrote a userscript for GW to always show the total number of votes, which allows me to infer this somewhat.) However that doesn’t address the bulk of my concerns, which I’ve laid out in more detail in this comment. In connection with karma, I’ve observed that sometimes a post is initially upvoted a lot, until someone posts a good critique, which then causes the karma of the post to plummet. This makes me think that the karma could be very misleading (even with upvotes/downvotes exposed) if the critique had been banned or disincentivized.
We’ve been thinking about this for the EA Forum. I endorse Raemon’s thoughts here, I think, but I know I can’t pass the ITT of a more transparent side here.
First, my read of both Said and Duncan is that they appreciate attention to the object level in conflicts like this. If what’s at stake for them is a fact of the matter, shouldn’t that fact get settled before considering other issues? So I will begin with that. What follows is my interpretation (mentioned here so I can avoid saying “according to me” each sentence).
In this comment, Said describes as bad “various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on”, without specifically identifying Duncan as proposing that norm (tho I think it’s heavily implied).
Then gjm objects to that characterization as a straw man.
In this comment Said defends it, pointing out that Duncan’s standard of “critics should do some of the work of crossing the gap” is implicitly a rule against “asking people for examples of their claims [without anything else]”, given that Duncan thinks asking for examples doesn’t count as doing the work of crossing the gap. (Earlier in the conversation Duncan calls it 0% of the work.) I think the point as I have written it here is correct and uncontroversial; I think there is an important difference between the point as I wrote it and the point as Said wrote it.
In the response I would have wanted to see, Duncan would have clearly and correctly pointed to that difference. He is in favor of people asking for examples [combined with other efforts to cross the gap], does it himself, gives examples himself, and so on. The unsaid “[without anything else]” part is load-bearing, and thus inappropriate to leave out or merely hint at. [Or, alternatively, using “ask people for examples” to refer to comments that do only that, as opposed to the conversational move which can be included or not in a comment with other moves.]

Instead we got this comment, where Duncan interprets Said’s claim narrowly, disagrees, and accuses Said of either lying or being bad at reading comprehension. (This does not count as two hypotheses in my culture.)
Said provides four examples; Duncan finds them unconvincing and calls using them as citations a blatant falsehood. Said leaves it up to the readers to adjudicate here. I do think this was a missed opportunity for Said to see the gap between what he stated and what I think he intended to state.
From my perspective, my reading of Said’s accusation is not clearly suggested in the comment gjm objected to, is obviously suggested from the comment Duncan responds to, with the second paragraph[1] doing most of the work, and then further pointed at by later comments. If Said ate breakfasts of only cereal, and Duncan said that was unhealthy and he shouldn’t do it, it is not quite right to say Duncan ‘thinks you shouldn’t eat cereal’, as he might be in favor of cereal as part of a balanced breakfast; but also it is not quite right for Duncan to ignore Said’s point that one of the main issues under contention is whether Said can eat cereal by itself (i.e. asking for examples without putting in interpretative labor). This looks like white horses are not horses.
So what about Said’s four examples? As one might expect, all four are evidence for my interpretation, and none of the four are evidence for Duncan’s interpretation. I would not call this a blatant falsehood,[2] and think all four of Duncan’s example-by-example responses are weak. Do we treat the examples as merely ‘evidence for the claim’, or also as ‘identification of the claim’?
So then we have to step back and consider non-object-level considerations, of which I see a few:
I think this situation is, on some level, pretty symmetric.
I think the features of Said’s commenting style that people (not just Duncan!) find annoying are things that Said is deliberately optimizing for or the results of principled commitments he’s made, so it’s not just a simple bug that can be fixed.
I think the features of Duncan’s conflict resolution methods that people find offputting are similarly things that Duncan is deliberately optimizing for or the results of principled commitments he’s made, so it’s not just a simple bug that can be fixed.
I think both Said and Duncan a) contribute great stuff to the site and b) make some people like posting on LW less and it’s unclear what to do about that balance. This is one of the things that’s nice about clear rules that people are either following or not—it makes it easier for everyone to tell whether something is ‘allowed’ or ‘not allowed’, ‘fine’ or ‘not fine’, and so on, rather than making complicated judgments of whether or not you want someone around. I think the mod team does want to exercise some judgment and discernment beyond just rule-following, however.
How bad is it to state something that’s incorrect because it is too broad and then narrow it afterwards? Duncan has written about this in Ruling Out Everything Else, and I think Said did an adequate but not excellent job.
What’s the broader context of this discussion? Said has a commenting style that Duncan strongly dislikes, and Duncan seems to be in the midst of an escalating series of comments and posts pointing towards “the mods should ban Said”. My reckless speculation is that this comment looked to Duncan like the smoking gun that he could use to prove Said’s bad faith, and he tried to prosecute it accordingly. (Outside of context, I would be surprised by my reading not being raised to Duncan’s attention; in context, it seems obvious why he would not want (consciously or subconsciously) to raise that hypothesis.) My explanation is that Said’s picture of good faith is different than Duncan’s (and, as far as I can tell, both fit within the big tent of ‘rationality’).
Incidentally, I should note that I view Duncan’s escalation as something of a bet, where if the mods had clearly agreed with Duncan, that probably would have been grounds for banning Said. If the mods clearly disagree with Duncan, then what does ‘losing the bet’ look like? What was staked here?
The legal system sees a distinction between ‘false testimony’ (being wrong under oath) and ‘perjury’ (deliberately being wrong under oath), and it seems like a lot of this case hinges on “was Said deliberately wrong, or accidentally wrong?” and “was Duncan deliberately wrong, or accidentally wrong?”.
I also don’t expect it to be uncontroversial “who started it”. Locally, my sense is Duncan started it, and yet when I inhabit Duncan’s perspective, this is all a response to Said and his style. I interpret a lot of Duncan’s complaints here thru the lens of imaginary injury that he writes about here.
I think also there’s something going on where Duncan is attempting to mimic Said’s style when interacting with Said, but in a way that wouldn’t pass Said’s ITT. Suppose my comment here had simply been a list of ways that Duncan behaved poorly in this exchange; then I think Duncan could take the approach of “well, but Said does the same thing in places A, B, and C!”. I think he overestimates how convincing I would find that, and Duncan did a number of things in this exchange that my model of Said would not do and has not done (according to my interpretation, but not my model of Duncan’s, in a mirror of the four examples above).
I think Said is trying to figure out which atomic actions are permissible or impermissible (in part because it is easier to do local validity checking on atomic actions), and Duncan is trying to suggest what is permissible or impermissible is more relational and deals with people’s attitudes towards each other (as suggested by gjm here). I feel sympathetic to both views here; I think Duncan often overestimates how familiar readers will be with his works / how much context he can assume, and yet also I think Said is undercounting how much people’s memory of past interactions colors their experience of comments. [Again, I think these are not simple bugs but deliberate choices—I think Duncan wants to build up a context in which people can hold each other accountable and build further work together, and I think Said views colorblindness of this sort as superior to being biased.]
I note that my reasons for this are themselves perhaps “white horses are not horses” reasons, where I think Said’s original statement and follow-up are both imprecise, but they’re missing the additional features that would make them “blatant falsehoods”, while both imprecise statements and blatant falsehoods are “incorrect”.

Vaniver privately suggested to me that I may want to offer some commentary on what I could’ve done in this situation in order for it to have gone better, which I thought was a good and reasonable suggestion. I’ll do that in this comment, using Vaniver’s summary of the situation as a springboard of sorts.
So, first of all, yes, I was clearly referring to Duncan. (I didn’t expect that to be obscure to anyone who’d bother to read that subthread in the first place, and indeed—so far as I can tell—it was not. If anyone had been confused, they would presumably have asked “what do you mean?”, and then I’d have linked what I mean—which is pretty close to what happened anyway. This part, in any case, is not the problem.)
The obvious problem here is that “don’t ask people for examples of their claims”—taken literally—is, indeed, a strawman.
The question is, whose problem (to solve) is it?
There are a few possible responses to this (which are not mutually exclusive).
On the one hand, if I want people to know what I mean, and instead of saying what I mean, I say something which is only approximately what I mean, and people assume that I meant what I said, and respond to it—well, whose fault is that, but mine?
Certainly one could make protestations along the lines of “haven’t you people ever heard of [ hyperbole / colloquialisms / writing off the cuff and expecting that readers will infer from surrounding context / whatever ]”, but such things are always suspect. (And even if one insists that there’s nothing un-virtuous about any particular instance of any one of those rhetorical or conversational patterns, nevertheless it would be a bit rich to get huffy about people taking words literally on Less Wrong, of all places.)
So, in one sense, the whole problem would’ve been avoided if I’d taken pains to write as precisely as I usually try to do. Since I didn’t do that, and could have, the fault would seem to be mine; case closed.
But that account doesn’t quite work.
For one thing, if someone says something you think is wrong, and you say “seems wrong to me actually”, and they reply “actually I meant this other thing”—well, that seems to me to be a normal and reasonable sort of exchange; this is how understanding is reached. I made a claim; gjm responded that it seemed like a strawman; I responded with a clarification.
Note that here I definitely made a mistake; what I should’ve included in that comment, but left out, was a clear and unambiguous statement along the lines of:
“Yes, taken literally, ‘don’t ask people for examples of their claims’ would of course be a strawman. I thought that the intended reading would be clear, but I definitely see the potential for literal (mis-)reading, sorry. To clarify:”
The rest of that comment would then have proceeded as written. I don’t think that it much needs amendment. In particular, the second paragraph (which, as Vaniver notes, does much of the work) gives a concise and clear statement of the claim which I was originally (and, at first, sloppily) alluding to. I stand by that clarified claim, and have seen nothing that would dissuade me from it.
Importantly, however, we can see that Duncan objects, quite strenuously, even to this clarified and narrowed form of what I said!
(As I note in this comment, it was not until after essentially the whole discussion had already taken place that Duncan edited his reply to my latter comment to explicitly disclaim the view that I ascribed to him. For the duration of that whole long comment exchange, it very much seemed to me that Duncan was not objecting because I was ascribing to him a belief he does not hold, but rather because he had not said outright that he held such a belief… but, of course, I never claimed that he had!)
So even if that clarified comment had come first (having not, therefore, needed any acknowledgment of previous sloppiness), there seems to be little reason to believe that Duncan would not have taken umbrage at it.
Despite that, failing to include that explicit acknowledgement was an error. Regardless of whether it can be said to be responsible for the ensuing heated back-and-forth (I lean toward “probably not”), this omission was very much a failure of “local validity” on my part, and for that there is no one to blame but me.
Of the rest of the discussion thread, there is little that needs to be said. (As Vaniver notes, some of my subsequent comments both clarify my claims further and also provide evidence for them.)
I agree that the hypothetical comment you describe as better is in fact better. I think something like … twenty-or-so exchanges with Said ago, I would have written that comment? I don’t quite know how to weigh up [the comment I actually wrote is worse on these axes of prosocial cooperation and revealing cruxes and productively clarifying disagreement and so forth] with [having a justified true belief that putting forth that effort with Said in particular is just rewarded with more branches being created].
(e.g. there was that one time recently where Said said I’d blocked people due to disagreeing with me/criticizing me, and I said no, I haven’t blocked anybody for disagreeing/criticizing, and he responded “I didn’t say anything about ‘blocked for disagreeing [or criticizing]’. (Go ahead, check!)” and the actual thing he’d said was that they’d been blocked due to disagreeing/criticizing; that’s the level of … gumming up the works? gish-gallop? … that I’ve viscerally come to expect.)
Like, I think there’s plausibly a CEV-ish code of conduct in which I “should”, at that point, still have put forth the effort, but I think it’s also plausible that the correct code of conduct is one in which doing so is a genuine mistake and … noticing that there’s a hypothetical “better” comment is not the same as there being an implication that I should’ve written it?
Something something, how many turns of the cheek are actually correct, especially given that, the week prior, multiple commenters had been unable, with evidence+argument+personal testimony, to shift Said away from a strikingly uncharitable prior.
Mine either, to be clear; I felt by that point that Said had willingly put himself outside of the set of [signatories to the peace treaty], turning down many successive opportunities to remain in compliance with it. I was treating his statements closer to the way I think it is correct to treat the statements of the literal Donald Trump than the way I think it is correct to treat the statements of an undistinguished random Republican.
(I can go into the reasoning for that in more detail, but it seems sort of conflicty to do so unprompted.)
I’m a little lost in this analogy; this is sort of where the privileging-the-hypothesis complaint comes in.
The conversation had, in other places, centered on the question of whether Said can eat cereal by itself; Logan for instance highlighted Said’s claim in a reply on FB:
There, the larger question of “can you eat only cereal, or must you eat other things in balance?” is front-and-center.
But at that point in the subthread, it was not front-and-center; yes, it was relevant context, but the specific claim being made by Said was clear, and discrete, and not at all dependent-on or changed-by that context.
The history of that chain:
Said includes, in a long comment, “In summary, I think that what’s been described as ‘aiming for convergence on truth’ is some mixture of” … “contentless” … “good but basically unrelated to the rest of it” … “bad (various proposed norms of interaction such as ‘don’t ask people for examples of their claims’ and so on)”
gjm, in another long comment, includes “I don’t know where you get ‘don’t ask people for examples of their claims’ from and it sounds like a straw man” and goes on to elaborate “I think the things Duncan has actually said are more like ‘Said engages in unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself’, and wherever that lands on a scale from ‘100% truth’ to ‘100% bullshit’ it is not helpful to pretend that he said ‘it is bad to ask people for examples of their claims’.”
There’s a bunch of other stuff going on in their back and forth, but that particular thread has been isolated and directly addressed, in other words. gjm specifically noted the separation between the major issue of whether balance is required, and this other, narrower claim.
Said replied:
Which, yes, I straightforwardly agree with the if-then statement; if “asking people for examples of their claims” didn’t fit my stated criteria for what constitutes acceptable engagement or criticism, then it would be correct to describe me as advocating for a norm of “don’t ask people for examples of their claims.”
But like. The if does not hold. It really clearly doesn’t hold. It was enough of an out-of-nowhere strawman/non-sequitur that gjm specifically called it out as “???”, at which point Said doubled down, saying the above and also
It seems like, in your interpretation, I “should” (in some sense) be extending a hand of charity and understanding and, I dunno, helping Said to coax out his broader, potentially more valid point—helping him to get past his own strawman and on to something more steel, or at least flesh. Like, if I am reading you correctly above, you’re saying that, by focusing in on the narrow point that had been challenged by gjm and specifically reaffirmed by Said, I myself was making some sort of faux pas.
(Am I in fact reading you correctly?)
I do not think so. I think that, twenty exchanges prior, I perhaps owed Said something like that degree of care and charity and helping him avoid tying his own shoelaces together. I certainly feel I would owe it to, I dunno, Eric Rogstad or Julia Galef, and would not be the slightest bit loath to provide it.
But here, Said had just spent several thousand words the week prior, refusing to be budged from a weirdly uncharitable belief about the internals of my mind, despite that belief being incoherent with observable evidence and challenged by multiple non-me people. I don’t think it’s wise-in-the-sense-of-wisdom to a) engage with substantial charity in that situation, or b) expect someone else to engage with substantial charity in that situation.
(You can tell that my stated criteria do not rule out asking people for examples of their claims in part because I’ve written really quite a lot about what I think constitutes acceptable engagement or criticism, and I’ve just never come anywhere close to a criterion like that, nor have I ever complained about someone asking for examples unless it was after a long, long string of what felt like them repeatedly not sharing in the labor of truthseeking. Like, the closest I can think of is this thread with tailcalled, in which (I think/I hope) it’s pretty clear that what’s going on is that I was trying to cap the total attention paid to the essay and its discussion, and thus was loath to enter into something like an exchange of examples—not that it was bad in any fundamental sense for someone to want some. I did in fact provide some, a few comments deeper in the thread, though I headlined that I hadn’t spent much time on them.)
So in other words: I don’t think it was wrong to focus on the literal, actual claim that Said had made (since he made it, basically, twice in a row, affirming “no, I really mean this” after gjm’s objection and even saying that he thinks it is so obvious as to not be controversial). I don’t think I “ought” to have had a broader focus, under the circumstances—Said was making a specific, concrete, and false claim, and his examples utterly fail to back up that specific, concrete, and false claim (though I do agree with you that they back up something like his conception of our broader disagreement).
I dunno, I’m feeling kind of autistic, here, but I feel like if, on Less Wrong dot com, somebody makes a specific, concrete claim about my beliefs or policies, clarifies that yes, they really meant that claim, and furthermore says that such-and-such links are “citations for [me] expressing the sentiment [they’ve] ascribed to [me]” when they simply are not—
It feels like emphatic and unapologetic rejection should be 100% okay, and not looked at askance. The fact that they are citations supporting a different claim is (or at least, I claim, should be) immaterial; it’s not my job to steelman somebody who spent hours and hours negatively psychologizing me in public (while claiming to have no particular animus, which, boy, a carbon copy of Said sure would have had Words about).
I think there’s a thing here of standards unevenly applied; surely whatever standard would’ve had me address Said’s “real” concern would’ve also had Said behave much differently at many steps prior, possibly never strawmanning me so hard in the first place?
I think the asymmetry breaks in that, like, a bunch of people have asked Said to stop and he won’t; I’m quite eager to stop doing the conflict resolution that people don’t like, if there can pretty please be some kind of system in place that obviates it. I much prefer the world where there are competent police to the world where I have to fight off muggers in the alley—that’s why I’m trying so hard to get there to be some kind of actually legible standards rather than there always being some plausible reason why maybe we shouldn’t just say “no” to the bullshit that Zack or Said or anonymouswhoever is pulling.
Right now, though, it feels like we’ve gone from “Ben Hoffman will claim Duncan wants to ghettoize people and it’ll be left upvoted for nine days with no mod action” to “Ray will expound on why he thinks it’s kinda off for Said to be doing what he’s doing but there won’t be anything to stop Said from doing it” and I take Oli’s point about this stuff being hard and there being other priorities but like, it’s been years. And I get a stance of, like, “well, Duncan, you’re asking for a lot,” but I’m trying pretty hard to earn it, and to … pave the way? Help make the ask smaller? … with things like the old Moderating LessWrong post and the Concentration of Force post and the more recent Basics post. Like, I can’t think of much more that someone with zero authority and zero mantle can do. My problem is that abuse and strawmanning of me gets hosted on LW and upvoted on LW and people are like, well, maybe if you patiently engaged with and overturned the abuse and strawmanning in detail instead of fighting back—
I dunno. If mods would show up and be like “false” and “cut it out” I would pretty happily never get into a scrap on LW ever again.
:(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((
This, more than anything else, is like “just give up and leave, this is definitely not a garden.”
I didn’t make it to every point, but hopefully you find this more of the substantive engagement you were hoping for.
At the risk of guessing wrong, and perhaps typical-mind-fallacying, I imagine that you’re [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You’ve spent dozens of hours (more?) and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and to try to uphold what are, indeed, basic standards for truthseeking discourse. You’ve written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.
I don’t think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just VeryGoodLessWrongs), though I’m also pretty sure that you’d be much happier with my ideal: you’d think it was pretty good if not perfect. Respectable, maybe adequate. A garden.
And I’m really sad that the current LessWrong falls really, really far short of my own ideals (and Ray of his ideals, and Oli of his ideals, etc.). And not just short of a super-amazing-lofty-ideal, but also short of a “this place is really under control” kind of ideal. I take responsibility for it not being so, and I’m sorry. I wouldn’t blame you for saying this isn’t good enough and wanting to leave[1]; there are some pretty bad flaws.
But sir, you impugn my and my site’s honor. This is not a perfect garden, but it is also not a jungle. And there is an awful lot of gardening going on. I take it very seriously that LessWrong is not just any place, and it takes ongoing work to keep it so. This is approx my full-time job (and that of others too), and while I don’t work 80-hour weeks, I feel like I put a tonne of my soul into this site.
Over the last year, I’ve been particularly focused on what I suspect are existential threats to LessWrong (not even to the ideal, just to the decently-valuable thing we have now). I think this very much counts as gardening. The major one over the last year is how to both have all the AI content (and I do think AI is the most important topic right now) and not have it eat LessWrong and turn it into the AI website rather than the truth-seeking/effectiveness/rationality website, which is actually what I believe is its true spirit[2]. So far, I feel like we’re still failing at this. On many days, the Frontpage is 90+% AI posts. It’s not been a trivial problem to solve.
The other existential problem, beyond the topic, that I’ve been anticipating for a long time and that is now heating up, is the deluge of new users flowing to the site because of the rising prominence of AI. Moderation is currently our top focus, but even before that, the first thing we do when the team gets in each morning is review every new post, all first-time submissions from users, and the activity of users who are getting a lot of downvotes. It’s not exactly fun, but we do it basically every day[3]. In the interests of greater transparency and accountability, we will soon build a Rejected Content section of the site where you’ll be able to view the content we didn’t let go live, and I predict that will demonstrate just how much this garden is getting tended, and that counterfactually the quality would be a lot, lot worse. You can see here a recent internal document that describes my sense of priorities for the team.
I think the discourse norms and bad behavior (and I’m willing to say now, in advance of my more detailed thoughts, that there’s a lot of badness to how Said behaves) are also serious threats to the site, and we do give those attention too. They haven’t felt like the most pressing threats (or, for that matter, opportunities) recently, and I could be making a mistake there, but we do take them seriously. Our focus (which I think has a high opportunity cost) has been turned to the exchanges between you and Said this week; plausibly you’ve done us a service by drawing our attention to behavior we should be deeming intolerable, and it’s easily 50–100 hours of team attention.
It is plausible the LessWrong team has made a mistake in not prioritizing this stuff more highly over the years (it has been years – though Said and Zack and others have in fact received hundreds of hours of attention), and there are definitely particular projects that I think turned out to be misguided and less valuable than marginal moderation would have been, but I’ll claim that it was definitely not an obvious mistake that we haven’t addressed the problems you’re most focused on.
It is actually on my radar, and I’ve been actively wanting for a while, a system that reliably gets the mod team to show up and say “cut it out” sometimes. I suspect that’s what should have happened a lot earlier on in your recent exchanges with Said. I might have liked to say “Duncan, we the mods certify that if you disengage, it is no mark against you” or something. I’m not sure. Ray mentioned the concept of “Maslow’s Hierarchy of Moderation” and I like that idea, and would like to get soon to the higher level where we’re actively intervening in these cases. I regret that I in particular on the team am not great at dropping what I’m doing to pivot when these threads come up; perhaps I should work on that.
I think a claim you could make is that the LessWrong team should have hired more people so they could cover more of this. Arguing about why we haven’t (or why Lightcone as a whole didn’t keep more team members on the LessWrong team) is a bigger discussion. I think things would be worse if the LessWrong team had been bigger most of the time, and, barring an unusually good candidate, it’d be bad to hire right now.
All this to say: this garden has a lot of shortcomings, but the team works quite hard to keep it at least as good as it is and to try to make it better. Fair enough if it doesn’t meet your standards, or isn’t how you’d do it; perhaps we’re not all that competent. Fair enough.
(And also you’ve had a positive influence on us, so your efforts are not completely in vain. We do refer to your moderation post/philosophy even if we haven’t adopted it wholesale, and make use of many of the concepts you’ve crystallized. For that I am grateful. Those are contributions I’d be sad to lose, but I don’t want to push you to offer them to us if doing so is too costly for you.)
I will also claim though that a better version of Duncan would be better able to tolerate the shortcomings of LessWrong and improve it too; that even if your efforts to change LW aren’t working enough, there are efforts on yourself that would make you better, and better able to benefit from the LessWrong that is.
Something like: the core identity of LessWrong is rationality. In alternate worlds, that stays the same, but the major topic could be something else.
Over the weekend, some parts of the reviewing get deferred till the work week.
This is fair, and I apologize; in that line I was speaking from despair and not particularly tracking Truth.
A [less straightforwardly wrong and unfair] phrasing would have been something like “this is not a Japanese tea garden; it is a British cottage garden.”
I have been to the Japanese tea garden in Portland, and found it exquisite, so I think I get your referent there.
Aye, indeed it is not that.
I probably rushed this comment out the door in a “defend my honor, set the record straight” instinct that I don’t think reliably leads to good discourse and is not what I should be modeling on LessWrong.
I did, thanks.
I think gjm’s comment was missing the observation that “comments that just ask for examples” are themselves an example of “unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself”, and so it wasn’t cleanly about “balance: required or not?”. I think a reasonable reader could come away from that comment of gjm’s uncertain whether or not Said simply saying “examples?” would count as an example.
My interpretation of this section is basically the double crux dots arguing over the labels they should have, with Said disagreeing strenuously with calling his mode “unproductive” (and elsewhere over whether labor is good or bad, or how best to minimize it) and moving from the concrete examples to an abstract pattern (I suspect because he thinks the former is easier to defend than the latter).
I should also note here that I don’t think you have explicitly staked out that you think Said just saying “examples?” is bad (like, you didn’t here, which was the obvious place to), I am inferring that from various things you’ve written (and, tho this source is more suspect and so has less influence, ways other people have reacted to Said before).
Importantly, I think Said’s more valid point was narrower, not broader, and the breadth was the ‘strawmanning’ part of it. (If you mean to refer to the point dealing with the broader context, I agree with that.) The invalid “Duncan’s rule against horses” turning into the valid “Duncan’s rule against white horses”. If you don’t have other rules against horses—you’re fine with brown ones and black ones and chestnut ones and so on—I think that points towards your rule against white horses pretty clearly. [My model of you thinks that language is for compiling into concepts instead of pointing at concepts, and so “Duncan’s rule against horses” compiles into “Duncan thinks horses should be banned”, which is both incorrect and wildly inconsistent with the evidence. I think language is for both, and when one gives you a nonsense result, you should check the other.]
I will note a way here in which it is not quite fair that I am saying “I think you didn’t do a reasonable level of interpretive labor when reading Said”, in the broader context of your complaint that Said doesn’t do much interpretive labor (deliberately!). I think it is justified by the difference in how the two of you respond to the failure of that labor.
I am trying to place the faux pas not in the fact that you “reacted at all to that prompt” but in “how you reacted to the prompt”. More in the next section.
I think this point is our core disagreement. I see the second comment saying “yeah, Duncan’s rule against horses, the thing where he dislikes white ones”, and you proceeding as if he had just said “Duncan’s rule against horses.” I think there was an illusion of transparency behind “specifically reaffirmed by Said”.
Like, I think if you had said “STRAWMAN!” and tried to get us to put a scarlet S in Said’s username, this would have been a defensible accusation, and the punishment unusual but worth considering. Instead I think you said “LIAR!” and that just doesn’t line up with my reading of the thread (tho I acknowledge disagreement about the boundary between ‘lying’ and ‘strawmanning’) or my sense of how to disagree properly. In my favorite world, you call it a mislabeling and identify why you think the label fails to match (again, noting that gjm attempted to do so, tho I think not in a way that bridged the gap).
I mean, for sure I wish Said had done things differently! I described them in some detail, and not strawmanning you so hard in the first place was IMO the core one.
When I say “locally”, I am starting the clock at Killing Socrates, which was perhaps unclear.
Do you think Said would not also stop if, for every post he read on LW, he found that someone else had already made the comment he would have liked to have made?
(I do see a difference where the outcomes you seek to achieve are more easily obtained with mod powers backing them up, but I don’t think that affects the primary point.)
So, over here Elizabeth ‘summarizes’ Said in an unflattering way, and Said objects. I don’t think I will reliably see such comments before those mentioned in them do (there were only 23 minutes before Said objected) and it is not obvious to me that LW would be improved by me also objecting now.
But perhaps our disagreement is that, on seeing Elizabeth’s comment, I didn’t have a strong impulse to ‘set the record straight’; I attribute that mostly to not seeing Elizabeth’s comment as “the record,” tho I’m open to arguments that I should.
To clarify:
If one starts out looking to collect and categorize evidence of their conversational partner not doing their fair share of the labor, then a bunch of comments that just say “Examples?” would go into the pile. But just encountering a handful of comments that just say “Examples?” would not be enough to send a reasonable person toward the hypothesis that their conversational partner reliably doesn’t do their fair share of the labor.
“Do you have examples?” is one of the core, common, prosocial moves, and correctly so. It is a bid for the other person to put in extra work, but the scales of “are we both contributing?” don’t need to be balanced every three seconds, or even every conversation. Sometimes I’m the asker/learner and you’re the teacher/expounder, and other times the roles are reversed, and other times we go back and forth.
The problem is not in asking someone to do a little labor on your behalf. It’s having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.
Said simply saying “examples?” is an example, then, but only because of the strong prior from his accumulated behavior; if the rule is something like “doing this <100x/wk is fine, doing it >100x/wk is less fine,” then the question of whether a given instance “is an example” is slightly tricky.
Yeah, you may have pinned it down (the disagreement). I definitely don’t (currently) think it’s sensible to read the second comment that way, and certainly not sensible enough to mentally dock someone for not reading it that way even if that reading is technically available (which I agree it is).
I perhaps have some learned helplessness around what I can, in fact, expect from the mod team; I claim that if I had believed that this would be received as defensible I would’ve done that instead. At the time, I felt helpless and alone*/had no expectation of mod support for reasons I think are reasonable, and so was not proceeding as if there was any kind of request I could make, and so was not brainstorming requests.
*alone vis-a-vis moderators, not alone vis-a-vis other commenters like gjm
I do think that you should put a scarlet P in Said’s username, since he’s been doing it for a couple weeks now and is still doing it (c.f. “I have yet to see any compelling reason to conclude that this [extremely unlikely on its face hypothesis] is false.”).
I again agree that this is clearly a better set of moves in some sense, but I’m thinking in a fabricated options frame and being, like, is that really actually a possible world, in that the whole problem is Said’s utterly exhausting and unrewarding mode of engagement. Like, I wonder if I might convince you that your favorite world is incoherent and impossible, because it’s one in which people are engaging in the colloquial definition of insanity and never updating their heuristics based on feedback. Or maybe you’re saying “do it for the audience and for site norms, then,” which feels less like throwing good money after bad.
But like. I think I’m getting dinged for impatience when I did not, previously, get headpats for patience? The wanted behavior feels unincentivized relative to the unwanted behavior.
No, that was pretty clear, and that’s what generated the :((((((((. The choice to start the clock there feels unfair-to-Neville, like if I were a teacher I would glance at that and say “okay, obviously this is not the local beginning” and look further.
I am wary of irresponsibly theorizing about the contents of someone else’s mind. I do think that, if one looks over the explosive proliferation of his threads once he starts a back-and-forth, it’s unlikely that there’s some state in which Said is like “ah, people are already saying all the things!” I suspect that Said (like others, to be clear; this is not precisely a criticism) has an infinite priority list, and if all the things of top priority are handled by other commenters, he’ll move down to lower ones.
I do think that if you took all of Said’s comments, and distributed 8% of them each into the corpus of comments of Julia Galef, Anna Salamon, Rob Bensinger, Scott Garrabrant, you, Eliezer Yudkowsky, Logan Brienne Strohl, Oliver Habryka, Kelsey Piper, Nate Soares, Eric Rogstad, Spencer Greenberg, and Dan Keys this would be much better. Part of the problem is the sheer concentration of princely entitlement and speaking-as-if-it-is-the-author’s-job-to-convince-Said-particularly-regardless-of-whether-Said’s-skepticism-is-a-signal-of-any-real-problem-with-the-claims.
If Kelsey Piper locally is like, buddy, you need to give me more examples, or if Spencer Greenberg locally is like, but what the heck do you even mean by “annoying,” there’s zero sense (on my part, at least) that here we go again, more taking-without-contributing. Instead, with Kelsey and Spencer it feels like a series of escalating favors and a tightening of the web of mutual obligation in which everybody is grateful to everybody else for having put in so many little bits of work here and there, of course I want to spill some words to help connect the dots for Kelsey and Spencer, they’ve spilled so many words helping me.
The pattern of “give, then take, then give, then take, then take, then take, then give, then give” is a healthy one to model, and is patriotically Athenian in the frame of my recent essay, and is not one which, if a thousand newbies were to start emulating, would cause a problem.
I don’t think that mods should be chiming in and setting the record straight on every little thing. But when, like, Said spends multiple thousands of words in a literally irrational (in the sense of not having cruxes and not being open to update and being directly contradicted by evidence) screed strawmanning me and claiming that I block people for disagreeing with my claims or criticizing my arguments—
—and furthermore when I ask for mod help—
—then I do think that a LessWrong where a mod shows up to say “false” and “actually cut it out for real” is meaningfully different and meaningfully better than the current Wild West feel where Said doesn’t get in trouble but I do.
But why should this be a problem?
Why should people say “hey, could you not, or even just a little less”? If you do something that isn’t bad, that isn’t a problem, why should people ask you to stop? If it’s a good thing to do, why wouldn’t they instead ask you to do it more?
And why, indeed, are you still speaking in this transactional way?
If you write a post about some abstract concept, without any examples of it, and I write a post that says “What are some examples?”, I am not asking you to do labor on my behalf, I am not asking for a favor (which must be justified by some “favor credit”, some positive account of favors in the bank of Duncan). Quite frankly, I find that claim ridiculous to the point of offensiveness. What I am doing, in that scenario, is making a positive contribution to the discussion, both for your benefit and (even more importantly) for the benefit of other readers and commenters.
There is no good reason why you should resent responding to a request like “what are some examples”. There is no good reason why you should view it as an unjustified and entitled demand for a favor. There is definitely no good reason why you should view acceding to that request as being “for my benefit” (instead of, say, for your benefit, and for the benefit of readers).
(And the gall of saying “never reciprocating”, to me! When I write a post, I include examples pre-emptively, because I know that I should be asked to do so otherwise. Not “will be asked”, of course—but “should”. And when I write a post without enough examples, and someone asks for examples, I respond in great detail. Note that my responses in that thread are much, much longer than the comment which asked for examples. Of course they are! Because the question doesn’t need to be longer—but the answers do!)
(And you might say: “but Said, you barely write any posts—like one a year, at best!”. Indeed. Indeed.)
Maybe “resent” is doing most of the work here, but an excellent reason not to respond is that it takes work. To the extent that there are norms in place that urge response, they create motivation to suppress criticism that would urge response. An expectation that it’s normal for criticism to be a request for response that should normally be granted is pressure to do the work of responding, which is costly, which motivates defensive action in the form of suppressing criticism.
A culture could make it costless (all else equal) to ignore the event of a criticism having been made. This is an inessential reason for suppressing criticism, one that can be removed, and therefore should be, to make criticism cheaper and more abundant.
The content of criticism may of course motivate the author of a criticized text to make further statements, but the fact of criticism’s posting by itself should not. The fact of not responding to criticism is some sort of noisy evidence of not having a good response that is feasible or hedonic to make, but that’s Law, not something that can change for the sake of mechanism design.
It’s certainly doing a decent amount of work, I agree.
Anyhow, your overall point is taken—although I have to point out that your last sentence seems like a rebuttal of your next-to-last sentence.
That having been said, of course the content of criticism matters. A piece of criticism could simply be bad, and clearly wrong; and then it’s good and proper to just ignore it (perhaps after having made sure that an interested party could, if they so wished, easily see or learn why that criticism is bad). I do not, and would not, advocate for a norm that all comments, all critical questions, etc., regardless of their content, must always be responded to. That is unreasonable.
I also want to note—as I’ve said several times in this discussion, but it bears repeating—there is nothing problematic or blameworthy about someone other than the author of a post responding to questions, criticism, requests for examples, etc. That is fine. Collaborative development of ideas is a perfectly normal and good thing.
What that adds up to, I think, is a set of requirements for a set of social norms which is quite compatible with your suggestion of making it “costless (all else equal) to ignore the event of a criticism having been made”.
They are in opposition, but the point is that they are about different kinds of things, and one of them can’t respond to policy decisions. It’s useful to have a norm that lessens the burden of addressing criticism. It’s Law of reasoning that this burden can nonetheless materialize. The Law is implacable but importantly asymmetric, it only holds when it does, not when the court of public opinion says it should. While the norms are the other way around, and their pressure is somewhat insensitive to facts of a particular situation, so it’s worth pointing them in a generally useful direction, with no hope for their nuanced or at all sane response to details.
Perhaps the presence of Law justifies norms that are over-the-top forgiving to ignoring criticism, or find ignoring criticism a bit praiseworthy when it would be at all unpleasant not to ignore it, to oppose the average valence of Law, while of course attempting to preserve its asymmetry. So I’d say my last sentence in that comment argues that the next-to-last sentence should be stronger. Which I’m not sure I agree with, but here’s the argument.
Said, above, is saying a bunch of things, many of which I agree with, as if they are contra my position or my previous claims.
He can’t pass my ITT (not that I’ve asked him to), which means that he doesn’t understand the thing he’s trying to disagree with, which means that his disagreement is not actually pointing at my position; the things he finds ridiculous and offensive are cardboard cutouts of his own construction. More detail on that over here.
This response is manifestly untenable, given the comment of yours that I was responding to.
BTW I was surprised earlier to see you agree with the ‘relational’ piece of this comment because Duncan’s grandparent comment seems like it’s a pretty central example of that. (I view you as having more of a “visitor-commons” orientation towards LW, and Duncan has more of an orientation where this is a place where people inhabit their pairwise relationships, as well as more one-to-many relationships.)
Sorry, I’m not quite sure I follow the references here. You’re saying that… this comment… is a central example of… what, exactly?
That… seems like it’s probably accurate… I think? I think I’d have to more clearly understand what you’re getting at in your comment, in order to judge whether this part makes sense to me.
Sorry, my previous comment wasn’t very clear. Earlier I said:
and you responded with:
(and a few related comments) which made me think “hmm, I don’t think we mean the same thing by ‘relational’”. Then Duncan’s comment had a frame that I would have described as ‘relational’—as in focusing on the relationships between the people saying and hearing the words—which you then described as transactional.
Ah, I see.
I think that the sense in which I would characterize Duncan’s description as “transactional” is… mostly orthogonal to the question of “is this a relational frame”. I don’t think that this has much to do with the “‘visitor commons’ vs. ‘pairwise relationships’” distinction, either (although that distinction is an interesting and possibly important one in its own right, and you’re certainly more right than wrong about where my preferences lie in that regard).
(There’s more that I could say about this, but I don’t know whether anything of importance hinges on this point. It seems like it mostly shouldn’t, but perhaps you are a better judge of that…)
A couple quick notes for now:
I agree with Duncan here that it’s kinda silly to start the clock at “Killing Socrates”. Insofar as there’s a current live fight that is worth tracking separately from overall history, I think it probably starts in the comments of LW Team is adjusting moderation policy, and I think the recent-ish back and forth on Basics of Rationalist Discourse and “Rationalist Discourse” Is Like “Physicist Motors” is recent enough to be relevant (hence me including them in the OP).
I think Vaniver right now is focusing on resolving the point “is Said a liar?”, but not resolving the “who did most wrong?” question. (I’m not actually 100% sure on Vaniver’s goals/takes at the moment). I agree this is an important subquestion but it’s not the primary question I’m interested in.
I’m somewhat worried about this thread taking in more energy than it quite warrants, and making Duncan feel more persecuted than really makes sense here.
I roughly agree with Vaniver that “Liar!” isn’t the right accusation to have levied, but also don’t judge you harshly for having made it.
I think this comment of mine summarizes my relevant opinions here.
(tagging @Vaniver to make sure he’s at least tracking this comment)
Thanks.
I note (while acknowledging that this is a small and subtle distinction, but claiming that it is an important one nonetheless) that I said that I now categorize Said as a liar, which is an importantly and intentionally weaker claim than Said is a liar, i.e. “everyone should be able to see that he’s a liar” or “if you don’t think he’s a liar you are definitely wrong.”
(This is me in the past behaving in line with the points I just made under Said’s comment, about not confusing [how things seem to me] with [how they are] or [how they do or should seem to others].)
This is much much closer to saying “Liar!” than it is to not saying “Liar!” … if one is to round me off, that’s the correct place to round me off to. But it is still a rounding.
Nod, seems fair to note.
I just want to highlight this link (to one of Duncan’s essays on his Medium blog), which I think most people are likely to miss otherwise.
That is an excellent post! If it was posted on Less Wrong (I understand why it wasn’t, of course—EDIT: I was mistaken about understanding this; see replies), I’d strong-upvote it without reservation. (I disagree with some parts of it, of course, such as one of the examples—but then, that is (a) an excellent reason to provide specific examples, and part of what makes this an excellent post, and (b) the reason why top-level posts quite rightly don’t have agree/disagree voting. On the whole, the post’s thesis is simply correct, and I appreciate and respect Duncan for having written it.)

It’s not on LessWrong because of you, specifically. Like, literally that specific essay, I consciously considered where to put it, and decided not to put it here because, at the time, there was no way to prevent you from being part of the subsequent conversation.
Hmm. I retract the “I understand why it wasn’t [posted on Less Wrong]” part of my earlier comment! I definitely no longer understand.
(I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)
Said, as a quick note—this particular comment reminds me of the “bite my thumb” scene from Romeo and Juliet. To you, it might be innocuous, but to me, and I suspect to Duncan and others, it sounds like a deliberate insult, with just enough of a veil of innocence to make it especially infuriating.
I am presuming you did not actually mean this as an insult, but were instead meaning to express your genuine confusion about Duncan’s thought process. I am curious to know a few things:
Did you recognize that it sounded potentially insulting?
If so, why did you choose to express yourself in this insulting-sounding manner?
If not, does it concern you that you may not recognize when you are expressing yourself in an insulting-sounding way, and is that something you are interested in changing?
And if you didn’t know you sounded insulting, and don’t care to change, why is that?
There are some things which cannot be expressed in a non-insulting manner (unless we suppose that the target is such a saint that no criticism can affect their ego; but who among us can pretend to that?).
I did not intend insult, in the sense that insult wasn’t my goal. (I never intend insult, as a rule. What few exceptions exist, concern no one involved in this discussion.)
But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.
So, you ask:
The choice was between writing something that was necessary for the purpose of fulfilling appropriate and reasonable conversational goals, but could be written only in such a way that anyone but a saint would be insulted by it—or writing nothing.
I chose the former because I judged it to be the correct choice: writing nothing, simply in order to avoid insult, would have been worse than writing the comment which I wrote.
(This explanation is also quite likely to apply to any past or future comments I write which seem to be insulting in similar fashion.)
I want to register that I don’t believe you that you cannot, if we’re using the ordinary meaning of “cannot”. I believe that it would be more costly for you, but it seems to me that people are very often able to express content like that in your comment, without being insulting.
I’m tempted to try to rephrase your comment in a non-insulting way, but I would only be able to convey its meaning-to-me, and I predict that this is different enough from its meaning-to-you that you would object on those grounds. However, insofar as you communicated a thing to me, you could have said that thing in a non-insulting way.
I believe you when you say that you don’t believe me.
But I submit to you that unless you can provide a rephrasing which (a) preserves all relevant meaning while not being insulting, and (b) could have been generated by me, your disbelief is not evidence of anything except the fact that some things seem easy until you discover that they’re impossible.
My guess is that you believe it’s impossible because the content of your comment implies a negative fact about the person you’re responding to. But insofar as you communicated a thing to me, it was in fact a thing about your own failure to comprehend, and your own experience of bizarreness. These are not unflattering facts about Duncan, except insofar as I already believe your ability to comprehend is vast enough to contain all “reasonable” thought processes.
Indeed, they are not—or so it would seem. So why would my comment be insulting?
After all, I didn’t write “your stated reason is bizarre”, but “I find your stated reason bizarre”. I didn’t write “it seems like your thinking here is incoherent”, but “I can’t form any coherent model of your thinking here”. I didn’t… etc.
So what makes my comment insulting?
Please note, I am not saying “my comment isn’t insulting, and anyone who finds it so is silly”. It is insulting! And it’s going to stay insulting no matter how you rewrite it, unless you either change what it actually says or so obfuscate the meaning that it’s not possible to tell what it actually says.
The thing I am actually saying—the meaning of the words, the communicated claims—imply unflattering facts about Duncan.[1] There’s no getting around that.
The only defensible recourse, for someone who objects to my comment, is to say that one should simply not say insulting things; and if there are relevant things to say which cannot be said non-insultingly, then they oughtn’t be said… and if anything is lost thereby, well, too bad.
And that would be a consistent point of view, certainly. But not one to which I subscribe; nor do I think that I ever will.
To whatever extent a reader believes that I’m a basically reasonable person, anyway. Ironically, a reader with a low opinion of me should find my comment less insulting to Duncan. Duncan himself, one might imagine, would not find it insulting at all. But of course that’s not how people work, and there’s no point in deluding ourselves otherwise…
For what it’s worth, I don’t think that one should never say insulting things. I think that people should avoid saying insulting things in certain contexts, and that LessWrong comments are one such context.
I find it hard to square your claim that insultingness was not the comment’s purpose with the claim that it cannot be rewritten to elide the insult.
An insult is not simply a statement with a meaning that is unflattering to its target—it involves using words in a way that aggressively emphasizes the unflatteringness and suggests, to some extent, a call to non-belief-based action on the part of the reader.
If I write a comment entirely in bold, in some sense I cannot un-bold it without changing its effect on the reader. But I think it would be pretty frustrating to most people if I then claimed that I could not un-bold it without changing its meaning.
You still haven’t actually attempted the challenge Said laid out.
I’m not sure what you mean—as far as I can tell, I’m the one who suggested trying to rephrase the insulting comment, and in my world Said roughly agreed with me about its infeasibility in his response, since it’s not going to be possible for me to prove either point: Any rephrasing I give will elicit objections on both semantics-relative-to-Said and Said-generatability grounds, and readers who believe Said will go on believing him, while readers who disbelieve will go on disbelieving.
You haven’t even given an attempt at rephrasing.
Nor should I, unless I believe that someone somewhere might honestly reconsider their position based on such an attempt. So far my guess is that you’re not saying that you expect to honestly reconsider your position, and Said certainly isn’t. If that’s wrong then let me know! I don’t make a habit of starting doomed projects.
I think for the purposes of promoting clarity this is a bad rule of thumb. The decision to explain should be more guided by effort/hedonicity and availability of other explanations of the same thing that are already there, not by strategically withholding things based on predictions of how others would treat an explanation. (So for example “I don’t feel like it” seems like an excellent reason not to do this, and doesn’t need to be voiced to be equally valid.)
I think I agree that this isn’t a good explicit rule of thumb, and I somewhat regret how I put this.
But it’s also true that a belief in someone’s good-faith engagement (including an onlooker’s), and in particular their openness to honest reconsideration, is an important factor in the motivational calculus, and for good reasons.
The structure of a conflict and motivation prompted by that structure functions in a symmetric way, with the same influence irrespective of whether the argument is right or wrong.
But the argument itself, once presented, is asymmetric, it’s all else equal stronger when correct than when it’s not. This is a reason to lean towards publishing things, perhaps even setting up weird mechanisms like encouraging people to ignore criticism they dislike in order to make its publication more likely.
If you’re not even willing to attempt the thing you say should be done, you have no business claiming to be arguing or negotiating in good faith.
You claimed this was low-effort. You then did not put in the effort to do it. This strongly implies that you don’t even believe your own claim, in which case why should anyone else believe it?
It also tests your theory. If you can make the modification easily, then there is room for debate about whether Said could. If you can’t, then your claim was wrong and Said obviously can’t either.
I think it’s pretty rough for me to engage with you here, because you seem to be consistently failing to read the things I’ve written. I did not say it was low-effort. I said that it was possible. Separately, you seem to think that I owe you something that I just definitely do not owe you. For the moment, I don’t care whether you think I’m arguing in bad faith; at least I’m reading what you’ve written.
Additionally, yes, you do owe me something. The same thing you owe to everyone else reading this comment section, Said included. An actual good-faith effort to probe at cruxes to the extent possible. You have shown absolutely no sign of that in this part of the conversation and precious little of it in the rest of it. Which means that your whole side of this conversation has been weak evidence that Said is correct and you are not.
This might be true, but it doesn’t follow that anyone owes anyone anything as a result. Doing something as a result might shift the evidence, but people don’t have obligations to shift evidence.
Also, I think cultivating an environment where arguments against your own views can take root is more of an obligation than arguing for them, and it’s worth arguing against your own views when you see a clear argument pointing in that direction. But still, I wouldn’t go so far as to call even that an actual obligation.
Owing people a good-faith effort to probe at cruxes is not a result of anything in this conversation. It is universal.
You’ve said very little in a great deal of words. And, as I said initially, you haven’t even attempted this.
Forget requirement (b). You haven’t even attempted fulfilling requirement (a). And for as long as you haven’t, it is unarguably true that your disbelief is not evidence for any of your claims or beliefs.
This is the meaning of “put up or shut up”. If you want to be taken seriously, act seriously.
I more or less agree with this; I think that posting and commenting on Less Wrong is definitely a place to try to avoid saying anything insulting.
But not to try infinitely hard. Sometimes, there is no avoiding insult. If you remove all the insult that isn’t core to what you’re saying, and if what you’re saying is appropriate, relevant, etc., and there’s still insult left over—I do not think that it’s a good general policy to avoid saying the thing, just because it’s insulting.
By that measure, my comment does not qualify as an insult. (And indeed, as it happens, I wouldn’t call it “an insult”; but “insulting” is slightly different in connotation, I think. Either way, I don’t think that my comment may fairly be said to have these qualities which you list. Certainly there’s no “call to non-belief-based action”…!)
True, of course… but also, so thoroughly dis-analogous to the actual thing that we’re discussing that it mostly seems to me to be a non sequitur.
I think I disagree that your comment does not have these qualities in some measure, and they are roughly what I’m objecting to when I ask that people not be insulting. I don’t think I want you to never say anything with an unflattering implication, though I do think this is usually best avoided as well. I’m hopeful that this is a crux, as it might explain some of the other conversation I’ve seen about the extent to which you can predict people’s perception of rudeness.
There are of course more insulting ways you could have conveyed the same meaning. But there are also less insulting ways (when considering the extent to which the comment emphasizes the unflatteringness and the call to action that I’m suggesting readers will infer).
I believe that none was intended, but I also expect that people (mostly subconsciously!) interpret (a very small) one from the particular choice of words and phrasing. Where the action is something like “you should scorn this person”, and not just “this person has unflattering quality X”. The latter does not imply the former.
I think that, at this point, we’re talking about nuances so subtle, distinctions so fragile (in that they only rarely survive even minor changes of context, etc.), that it’s basically impossible to predict how they will affect any particular person’s response to any particular comment in any particular situation.
To put it another way, the variation (between people, between situations, etc.) in how any particular bit of wording will be perceived, is much greater than the difference made by the changes in wording that you seem to be talking about. So the effects of any attempt to apply the principles you suggest is going to be indistinguishable from noise.
And that means that any effort spent on doing so will be wasted.
I actually DO believe you can’t write this in a not-insulting way. I find that to be the result of not prioritizing developing and practicing those skills in general.
While I do judge you for this, I judge you for it one time, on the meta-level, instead of judging each instance separately, as I find this behavior orderly and predictable.
If it’s really a skill issue, why hasn’t anyone done that? If it can be written in a non-insulting way, demonstrate! I submit that you cannot.
I’m curious, what do you think of these options?
Original: “I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.”
New version 1: “I can’t form any coherent model of your thinking here.”
New version 2: “I don’t understand your stated reason at all.”
New version 3: Omit that sentence.
These shift the sentence from a judgment on Duncan’s reasoning to a sharing of Said’s own experience, which (for me, at least) removes the unnecessary/escalatory part of the insult.
New version 4: “(I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here. Like, this is a statement about me, not about your thinking, but that’s where I am. I kinda wish there was a way to say this non-insultingly, but I don’t know such a way.)”
That’s still shifting to a claim about social reality and therefore not the same thing.
Experiment:
It seems to me that Czynski is just plain wrong here. But I have no expectation of changing his mind, no expectation that engaging with him will be fun or enlightening for me, and also I think he’s wrong in ways that not many bystanders will be confused about if they even see this.
If someone other than Czynski or Said would be interested in a reply to the above comment, feel free to say so and I’ll provide one.
You really have no intellectual integrity at all, do you?
Version 1 is probably not the same content, since it is mostly about the speaker, and in any case preserves most of the insultingness. Version 2 is making it entirely about the speaker and therefore definitely different, losing the important content. Version 3 is very obviously definitely not the same content and I don’t know why you bothered including it. (Best guess: you were following the guideline of naming 3 things rather than 1. If so, there is a usual lesson when that guideline fails.)
Shifting to sharing the speaker’s experience is materially different. The content of the statement was a truth claim—making it a claim about an individual’s experience changes it from being about reality to being about social reality, which is not the same thing. It is important to be able to make truth claims directly about other people’s statements, because truth claims are the building blocks of real models of the world.
Hmm interesting. I agree that there is a difference between a claim about an individual’s experience, and a claim about reality. The former is about a perception of reality, whereas the latter is about reality itself. In that case, I see why you would object to the paraphrasing—it changes the original statement into a weaker claim.
I also agree that it is important to be able to make claims about reality, including other people’s statements. After all, people’s statements are also part of our reality, so we need to be able to discuss and reason about them.
I suppose what I disagree with, then, is that the original statement is valid as a claim about reality. It seems to me that statements are generally/by default claims about our individual perceptions of reality (e.g. “He’s very tall.”). A claim becomes a statement about reality only when linked (implicitly or explicitly) to something concrete (e.g. “He’s in the 90th percentile in height for American adult males.” or “He’s taller than Daddy.” or “He’s taller than the typical gymnast I’ve trained for competitions.”).
To say a stated reason is “bizarre” is a value judgment, and therefore cannot be considered a claim about reality. This is because there is no way to measure its truth value. If bizarre means “strange/unusual”, then what exactly is “normal/usual”? The way Less Wrong posters who upvoted Said’s comment would think? The way people with more than 1000 karma on Less Wrong would think? There is no meaning behind the word “bizarre” except as an indicator of the writer’s perspective (i.e. what the claim is really saying is “The stated reason is bizarre to Said”).
I suppose this also explains why such a statement would seem insulting to people who are more Duncan-like. (I acknowledge that you find the paraphrase as insulting as the original. However, since the purpose of this discussion is to find a way for people who are Duncan-like and people who are Said-like to communicate and work together, I believe the key concern should be whether someone who is Duncan-like would feel less insulted by the paraphrase. After all, people who are Duncan-like feel insulted by different things than people who are Said-like.)
For people who are Duncan-like, I expect the insult comes about because it presents a subjective (social reality) statement in the form of an objective (reality) statement. Said is making a claim about his own perspective, but he is presenting it as if it is objective truth, which can feel like he is invalidating all other possible perspectives. I would guess that people who are more Said-like are less sensitive, either because they think it is already obvious that Said is just making a claim from his own perspective or because they are less susceptible to influence from other people’s claims (e.g. I don’t care if the entire world tells me I am wrong, I don’t ever waver because I know that I am right.)
I included Version 3 because after coming up with Version 2, I noticed it was very similar to the earlier sentence (“I definitely no longer understand.”), so I thought another valid example would be simply omitting the sentence. It seemed appropriate to me because part of being polite is learning to keep your thoughts to yourself when they do not contribute anything useful to the conversation.
Somewhere (I can’t find it now) someone else wrote that if he did that, Said could always say it’s not exactly what he meant.
In this case, I find the comment itself not very insulting—the insult is in the general absence of goodwill between Said and Duncan, and in the refusal to do interpretive labor. So any comment of the form “my model of you was <model> and now I’m just confused” could have worked.
My model of Duncan avoided posting it here because of the general problems with LW, but I wasn’t surprised that it was a specific problem. I have no idea what Said’s model of Duncan was. But I will try, with the caveat that the “Said’s model of Duncan” suggested below is almost certainly not the true one:
“I thought that you avoided putting it on LW because there would be strong and wrong pushback here against the concept of imaginary injury. That seemed coherent with the crux of the post. Now that I’ve learned the truth, I’m simply confused. In my model, what you want to avoid is exactly the imaginary injury described in the post, and I can’t form a coherent model of you.”
I suspect Said would say I don’t pass his ideological Turing test on that, or would continue to say it’s not exact. I submit that if I cannot, the failure is not in writing non-insultingly, but in passing his ideological Turing test.
I’m not quite clear: are you saying that it’s literally impossible to express certain non-insulting meanings in a non-insulting way? Or that you personally are not capable of doing so? Or that you potentially could, but you’re not motivated to figure out how?
Edit—also, do you mean that it’s impossible to even reduce the degree to which it sounds insulting? Or are you just saying that such comments are always going to sound at least a tiny bit insulting?
This is helpful to me understanding you better. Thank you.
I… think that the concept of “non-insulting meaning” is fundamentally a confused one in this context.
Reduce the degree? Well, it seems like it should be possible, in principle, in at least some cases. (The logic being that it seems like it should be quite possible to increase the degree of insultingness without changing the substance, and if that’s the case, then one would have to claim that I always succeed at selecting exactly the least insulting possible version—without changes in substance—of any comment; and that seems like it’s probably unlikely. But there’s a lot of “seems” in that reasoning, so I wouldn’t place very much confidence in it. And I can also tell a comparably plausible story that leads to the opposite conclusion, reducing my confidence even further.)
But I am not sure what consequence that apparent in-principle truth has on anything.
Here’s a potential alternative wording of your previous statement.
Original: (I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)
New version: I am very confused by your stated reason, and I’m genuinely having trouble seeing things from your point of view. But I would genuinely like to. Here’s a version that makes a little more sense to me [give it your best shot]… but here’s where that breaks down [explain]. What am I missing?
I claim with very high confidence that this new version is much less insulting (or is not insulting at all). It took me all of 15 seconds to come up with, and I claim that it either conveys the same thing as your original comment (plus added extras), or that the difference is negligible and could be overcome with an ongoing and collegial dialog of a kind that the original, insulting version makes impossible. If you have an explanation for what of value is lost in translation here, I’m listening.
It’s certainly possible to write more words and thereby to obfuscate what you’re saying and/or alter your meaning in the direction of vagueness.
And you can, certainly, simply say additional things—things not contained in the original message, and that aren’t simply transformations of the meaning, but genuinely new content—that might (you may hope) “soften the blow”, as it were.
But all of that aside, what I’d actually like to note, in your comment, is this part:
First of all, while it may be literally true that coming up with that specific wording, with the bracketed parts un-filled-in, took you 15 seconds (if you say it, I believe it), the connotation that transmuting a comment from the “original” to the (fully qualified, as it were) “new version” takes somewhere on the order of 15 seconds (give or take a couple of factors of two, perhaps) is not believable.
Of course you didn’t claim that—it’s a connotation, not a denotation. But do you think it’s true? I don’t. I don’t think that it’s true even for you.
(For one thing, simply typing out the “fully qualified” version—with the “best shot” at explanation outlined, and the pitfalls noted, and the caveats properly caveated—is going to take a good bit longer. Type at 60 WPM? Then you’ve got the average adult beat, and qualify as a “professional typist”; but even so just the second paragraph of your comment would take you most of a minute to type out. Fill out those brackets, and how many words are you adding? 100? 300? More?)
But, perhaps more importantly, that stuff requires not just more typing, but much more thinking (and reading). What is worse, it’s thinking of a sort that is very, very likely to be a complete waste of time, because it turns out to be completely wrong.
For example, consider this attempt, by me, to describe in detail Duncan’s approach to banning people from his posts. It seemed—and still seems—to me to be an accurate characterization; and certainly it was written in such a way that I quite expected Duncan to assent to it. But instead the response was, more or less, “nah”. Now, either Duncan is lying there, and my characterization was correct but he doesn’t want to admit it; or, my characterization was wrong. In the former case I’ve mostly wasted my time; in the latter case I’ve entirely wasted my time. And this sort of outcome is ubiquitous, in my experience. Trying to guess what people are thinking, when you’re unsure or confused, is pointless. Guessing incorrectly tends to annoy people, so it doesn’t help to build bridges or maintain civility. The attempt wastes the guesser’s time and energy. It’s pretty much all downside, no upside.
If you don’t know, just say that you don’t know.
And the rest is transparent boilerplate.
This is the part I think is important in your objection—I agree with you that expanding the bracketed part would take more than 15 seconds. You’re claiming somewhere on the implicit-explicit spectrum that something substantial is lost in the translation from the original insulting version by you to the new non-insulting version by me.
I just straightforwardly disagree with that, and I challenge you to articulate what exactly you think is lost and why it matters.
I confess that I am not sure what you’re asking.
As far as saying additional things goes—well, uh, the additional things are the additional things. The original version doesn’t contain any guessing of meaning or any kind of thing like that. That’s strictly new.
As I said, the rest is transparent boilerplate. It doesn’t much obfuscate anything, but nor does it improve anything. It’s just more words for more words’ sake.
I don’t think anything substantive is lost in terms of meaning; the losses are (a) the time and effort on the part of the comment-writer, (b) annoyance (or worse) on the part of the comment target (due to the inevitably-incorrect guessing), (c) annoyance (or worse) on the part of the comment target (due to the transparent fluff that pretends to hide a fundamentally insulting meaning).
The only way for someone not to be insulted by a comment that says something like this is just to not be insulted by what it says. (Take my word for this—I’ve had comments along these lines directed at me many, many times, in many places! I mostly don’t find them insulting—and it’s not because people who say such things couch them in fluff. They do no such thing.)
Ah, I see. So the main thing I’m understanding here is that the meaning you were trying to convey to Duncan is understood, by you, as a fundamentally insulting one. You could “soften” it by the type of rewording I proposed. But this is not a case where you mean to say something non-insulting, and it comes out sounding insulting by accident. Instead, you mean to say something insulting, and so you’re just saying it, understanding that the other person will probably, very naturally, feel insulted.
An example of saying something fundamentally insulting is to tell somebody that you think they are stupid or ugly. You are making a statement of this kind. Is that correct?
No, I don’t think so…
But this comment of yours baffles me. Did we not already cover this ground?
Then what did you mean by this:
My understanding of this statement was that you are asserting that the core meaning of the original quote by you, in both your original version and my rewrite, was a fundamentally insulting one. Are you saying it was a different kind of fundamental insult from calling somebody stupid or ugly? Or are you now saying it was not an insult?
Well, firstly—as I say here, I think that there’s a subtle difference between “insulting” and “an insult”. But that’s perhaps not the key point.
That aside, it really seems like your question is answered, very explicitly, in this earlier comment of mine. But let’s try again:
Is my comment insulting? Yes, as I said earlier, I think that it is (or at least, it would not be unreasonable for someone to perceive it thus).
(Should it be insulting? Who knows; it’s complicated. Is it gratuitously insulting, or insulting in a way that is extraneous to its propositional meaning? No, I don’t think so. Would all / most people perceive it as insulting if they were its target? No / probably, respectively. Is it possible not to be insulted by it? Yes, it’s possible; as I said earlier, I’ve had this sort of thing said to me, many times, and I have generally failed to be insulted by it. Is it possible for Duncan, specifically, to not be insulted by that comment as written by me, specifically? I don’t know; probably not. Is that, specifically, un-virtuous of Duncan? No, probably not.)
Is my comment thereby similar to other things which are also insulting, in that it shares with those other things the quality of being insulting? By definition, yes.
Is it insulting in the same way as is calling someone stupid, or calling someone ugly? No, all three of these are different things, which can all be said to be insulting in some way, but not in the same way.
OK, this is helpful.
So it sounds like you perceive your comment as conveying information—a fact or a sober judgment of yours—that will, in its substance, tend to trigger a feeling of being insulted in the other person, possibly because they are sensitive to that fact or judgment being called to their attention.
But it is not primarily intended by you to provoke that feeling of being insulted. You might prefer it if the other person did not experience the feeling of being insulted (or you might simply not care); your aim is to convey the information, irrespective of whether or not it makes the other person feel insulted.
Is that correct?
Sounds about right.
Now that we’ve established this, what is your goal when you make insulting comments? (Note: I’ll refer to your comments as “insulting comments,” defined in the way I described in my previous comment). If you subscribe to a utilitarian framework, how does the cost/benefit analysis work out? If you are a virtue ethicist, what virtue are you practicing? If you are a deontologist, what maxim are you using? If none of these characterizes the normative beliefs you’re acting under, then please articulate what motivates you to make them in whatever manner makes sense to you. Making statements, however true, that you expect to make the other person feel insulted seems like a substantial drawback that needs some rationale.
If you care more about not making social attacks than telling the truth, you will get an environment which does not tell the truth when it might be socially inconvenient. And the truth is almost always socially inconvenient to someone.
So if you are a rationalist, i.e. someone who strongly cares about truth-seeking, this is highly undesirable.
Most people are not capable of executing on this obvious truth even when they try hard; the instinct to socially-smooth is too strong. The people who are capable of executing on it are, generally, big-D Disagreeable, and therefore also usually little-d disagreeable and often unpleasant. (I count myself as all three, TBC. I’d guess Said would as well, but won’t put words in his mouth.)
Yes, caring too much about not offending people means that people do not call out bullshit.
However, are rude environments more rational? Or do they just have different ways of optimizing for something other than truth? Just guessing here, but maybe disagreeable people derive too much pleasure from disagreeing with someone, or offending someone, so their debates skew that way. (How many “harsh truths” are not true at all, and are just popular because they offend someone?)
(When I tried to think about examples, I thought I found one: military. No one cares about the feelings of their subordinates, and yet things get done. However, people in the military care about not offending their superiors. So, probably not a convincing example for either side of the argument.)
I’m sure there is an amount of rudeness which generates more optimization-away-from-truth than it prevents. I’m less sure that this is a level of rudeness achievable in actual human societies. And for whether LW could attain that level of rudeness within five years even if it started pushing for rudeness as normative immediately and never touched the brakes—well, I’m pretty sure it couldn’t. You’d need to replace most of the mod team (stereotypically, with New Yorkers, which TBF seems both feasible and plausibly effective) to get that to actually stick, probably, and it’d still be a large ship turning slowly.
A monoculture is generally bad, so having a diversity of permitted conduct is probably a good idea regardless. That’s extremely hard to measure, so as a proxy, ensuring there are people representing both extremes who are prolific and part of most important conversations will do well enough.
I am probably just saying the obvious here, but a rude environment is not only one where people say true things rudely, but also where people say false things rudely.
So when we imagine the interactions that happen there, it is not just “someone says the truth, ignoring the social consequences” which many people would approve, but also “someone tries to explain something complicated, and people not only respond by misunderstanding and making fallacies, but they are also assholes about it” where many people would be tempted to say ‘fuck this’ and walk away. So the website would gravitate towards a monoculture anyway.
(I wanted to give The Motte as an example of a place that is further in that direction and where the quality seems to be lower… but I just noticed that the place is effectively dead.)
The concern is with requiring the kind of politeness that induces substantive self-censorship. This reduces efficiency of communicating dissenting observations, sometimes drastically. This favors beliefs/arguments that fit the reigning vibe.
The problems with (tolerating) rudeness don’t seem as asymmetric, it’s a problem across the board, as you say. It’s a price to consider for getting rid of the asymmetry of over-the-top substantive-self-censorship-inducing politeness.
The Motte has its own site now. (I agree the quality is lower than LW, or at least it was several months ago and that’s part of why I stopped reading. Though idk if I’d attribute that to rudeness.)
I do not think that is the usual result.
There’s another example: frats.
Even though the older frat members harass their subordinates via hazing rituals and so on, the new members wouldn’t stick around if they genuinely thought the older members were disagreeable people out to get them.
I write comments for many different reasons. (See this, this, etc.) Whether a comment happens to be (or be likely to be perceived as) “insulting” or not generally doesn’t change those reasons.
I do not agree.
Please see this comment and this comment for more details on my approach to such matters.
OK, I have read the comments you linked. My understanding is this:
You understand that you have a reputation for making comments perceived as social attacks, although you don’t intend them as such.
You don’t care whether or not the other person feels insulted by what you have to say. It’s just not a moral consideration for your commenting behavior.
Your aesthetic is that you prefer to accept that what you have to say has an insulting meaning, and to just say it clearly and succinctly.
Do you care about the manner in which other people talk to you? For example, if somebody wished to say something with an insulting meaning to you, would you prefer them to say it to you in the same way you say such things to others?
(Incidentally, I don’t know who’s been going through our comment thread downvoting you, but it wasn’t me. I’m saying this because I now see myself being downvoted, and I suspect it may be retaliation from you, but I am not sure about that.)
I have (it would seem) a reputation for making certain sorts of comments, which are of course not intended as “attacks” of any sort (social, personal, etc.), but which are sometimes perceived as such—and which perception, in my view, reflects quite poorly on those who thus perceive said comments.
Certainly I would prefer that things were otherwise. (Isn’t this often the case, for all of us?) But this cannot be a reason to avoid making such comments; to do so would be even more blameworthy, morally speaking, than is the habit on the part of certain interlocutors to take those comments as attacks in the first place. (See also this old comment thread, which deals with the general questions of whether, and how, to alter one’s behavior in response to purported offense experienced by some person.)
I don’t know if “aesthetic” is the right term here. Perhaps you mean something by it other than what I understand the term to mean.
In any case, indeed, clarity and succinctness are the key considerations here—out of respect for both my interlocutors and for any readers, who surely deserve not to have their time wasted by having to read through nonsense and fluff.
I would prefer that people say things to me in whatever way is most appropriate and effective, given the circumstances. Generally it is better to be more concise, more clear, more comprehensive, more unambiguous. (Some of those goals conflict, you may notice! Such is life; we must navigate such trade-offs.)
I have other preferences as well, though they are less important. I dislike vulgarity, for example, and name-calling. Avoiding these things is, I think, no more than basic courtesy. I do not employ them myself, and certainly prefer not to hear them addressed to me, or even in my presence. (This has never presented a problem, in either direction, on Less Wrong, and I don’t expect this to change.) Of course one can conceive of cases when these preferences must be violated in order to serve the goals of conciseness, clarity, etc.; in such a case I’d grin and bear it, I suppose. (But I can’t recall encountering such a case.)
Now that I’ve answered your questions, here’s one of my own:
What, exactly, is the point of this line of questioning? We seem to be going very deep down this rabbit hole, litigating these baroque details of connotation and perception… and it seems to me that nothing of any consequence hinges on any of this. What makes this tangent even slightly worth either my time or yours?
Just a small note that “Said interpreting someone as [interpreting Said’s comment as an attack]” is, in my own personal experience, not particularly correlated with [that person in fact having interpreted Said’s comment as an attack].
Said has, in the past, seemed to have perceived me as perceiving him as attacking me, when in fact I was objecting to his comments for other reasons, and did not perceive them as an attack, and did not describe them as attacks, either.
The comment you quoted was not, in fact, about you. It was about this (which you can see if you read the thread in which you’re commenting).
Note that in the linked discussion thread, it is not I, but someone else, who claims that certain of my comments are perceived as attacks.
In short, your comment is a non sequitur in this context.
No, it’s relevant context, especially given that you’re saying in the above ~[and I judge people for it].
(To be clear, I didn’t think that the comment I quoted was about me. Added a small edit to make that clearer.)
I wrote about five paragraphs in response to this, which I am fine with sharing with you on two conditions. First, because my honest answer contains quite a bit of potentially insulting commentary toward you (expressed in the same matter of fact tone I’ve tried to adopt throughout our interaction here), I want your explicit approval to share it. I am open to not sharing it, DMing it to you, or posting it here.
Secondly, if I do share it, I want you to precommit not to respond with insulting comments directed at me.
This seems like a very strange, and strangely unfair, condition. I can’t make much sense of it unless I read “insulting” as “deliberately insulting”, or “intentionally insulting”, or something like it. (But surely you don’t mean it that way, given the conversational context…?)
Could you explain the point of this? I find that I’m increasingly perplexed by just what the heck is going on in this conversation, and this latest comment has made me more confused than ever…
Yes, it’s definitely an unfair condition, and I knew that when I wrote it. Nevertheless—that is my condition.
If you would prefer a vague answer with no preconditions, I am satisfying my curiosity about somebody who thinks very differently about commenting norms than I do.
Alright, thanks.
I did (weak-)downvote one comment of yours in this comment section, but only one. If you’re seeing multiple comments downvoted, then those downvotes aren’t from me. (Of course I don’t know how I’d prove that… but for whatever my word’s worth, you have it.)
I believe you, and it doesn’t matter to me. I just didn’t want you to perceive me incorrectly as downvoting you.
I like the norm of discussing a hypothetical interpretation you find interesting/relevant, without a need to discuss (let alone justify) its relation to the original statement or God forbid intended meaning. If someone finds it interesting to move the hypothetical in another direction (perhaps towards the original statement, or even intended meaning), that is a move of the same kind, not a move of a different and privileged kind.
I agree that this can often be a reasonable and interesting thing to do.
I would certainly not support any such thing becoming expected or mandatory. (Not that you implied such a thing—I just want to forestall the obvious bad extrapolation.)
Do you mean that you don’t support the norm under which hypothetical interpretations of statements need not justify themselves as being related to those statements? In other words, do you mean (1) that you endorse the need to justify discussion of hypothetical interpretations of statements by showing those interpretations to be related to the statements they interpret, or something like that? Or (2) that you don’t endorse endless tangents becoming the norm, with the original statement forgotten? The daisy chain is too long.
It’s unclear how to shape the latter option into policy. For the former option, the issue is demand for particular proof. Things can be interesting for whatever reason; it doesn’t have to be a standard kind of reason. Prohibiting arbitrary reasons is damaging to the results, in this case I think for no gain.
No, absolutely not.
Yeah.
My view is that first it’s important to get clear on what was meant by some claim or statement or what have you. Then we can discuss whatever. (If that “whatever” includes some hypothetical interpretation of the original (ambiguous) claim, which someone in the conversation found interesting—sure, why not.) Or, at the very least, it’s important to get that clarity regardless—the tangent can proceed in parallel, if it’s something the participants wish.
EDIT: More than anything, what I don’t endorse is a norm that says that someone asking “what did you mean by that word/phrase/sentence/etc.?” must provide some interpretation of their own, whether that be a guess at the OP’s meaning, or some hypothetical, or what have you. Just plain asking “what did you mean by that?” should be ok!
Totally agreed.
(Expanding on this comment)
The key thing missing from your account of my views is that while I certainly think that “local validity checking” is important, I also—and, perhaps, more importantly—think that the interactions in question are not only fine, but good, in a “relational” sense.
So, for example, it’s not just that a comment that just says “What are some examples of this?” doesn’t, by itself, break any rules or norms, and is “locally valid”. It’s that it’s a positive contribution to the discussion, which is aimed at (a) helping a post author to get the greatest use out of his post and the process and experience of posting it, and (b) helping the commentariat get the greatest use out of the author’s post. (Of course, (b) is more important than (a)—but they are both important!)
Some points that follow from this, or depend on this:
First, such contributions should be socially rewarded to the degree that they are necessary. By “necessary”, here, I mean that if it is the case that some particular sort of criticism or some particular sort of question is good (i.e., it contributes substantially to how much use can be gotten out of a post), but usually nobody asks that sort of question or makes that sort of criticism, then anyone who does do that, should be seen as making not only a good but a very important contribution. (And it’s a bad sign when this sort of thing is common—it means that at least some sorts of important criticisms, or some sorts of important questions, are not asked nearly often enough!)
Meanwhile, if someone asks a sort of question or makes a sort of criticism which is equally good but is usually or often made, such that it is fairly predictable and authors can, with decent probability, expect to get it, then such a question or criticism is still good and praiseworthy, but not individually as important (though of course still virtuous!).
In the limit, an author will know that if they don’t address something in their post, somebody will ask about it, or comment on it. (And note that it’s not always necessary, in such a case, to anticipate a criticism or question in your post, even if you expect it will be made! You can leave it to the comments, being ready to respond to it if it’s brought up—or proactively bringing it up yourself, filling the role of your own devil’s advocate.)
In other words—
And this is a good thing. If you posit some abstraction in your post, you should think “they’re gonna ask me for examples in the comments”. (It’s a bad sign, again, if what you actually think is “Said Achmiz is gonna ask me for examples in the comments”!) And this should make you think about whether you have examples; and what those examples demonstrate; or, if you don’t have any, what that means; etc.
And the same goes for many other sorts of questions one could ask, or criticisms one could make.
(Relatedly: I, too, want to “build up a context in which people can hold each other accountable”. But what exactly do you think that looks like?)
Second, it is no demerit to a post author, if one commenter asks a question, and another commenter answers it, without the OP’s involvement (or perhaps with merely a quick note saying “endorsed!”). Indeed it’s no demerit to an author, even, if questions are asked, or criticisms made, in the comments, to which the OP has no good answer, but which are answered satisfactorily by others, such that the end result is that knowledge and understanding are constructed by a collective effort that results in even the author of the post, himself, learning something new!
This, by the way, is related to the reasons why I find the “authors can ban people from their posts” thing so frustrating and so thoroughly counterproductive. If I write a comment under someone’s post, about someone’s post, certainly there’s an obvious sense in which it’s addressed to the author of the post—but it’s not just addressed to them! If I wanted to talk to someone one-on-one, I could send a private message… but unless I make a point of noting that I’m soliciting the OP’s response in particular (and even then, what’s to stop anyone else from answering anyway?), or ask for something that only the OP would know… comments / questions are best seen as “put to the whole table”, so to speak. Yes, if the post author has an answer they think is appropriate to provide, they can, and should, do that. But so can and should anyone else!
It’s no surprise that, as others have noted, the comments section of a post is, not infrequently, at least as useful as the post itself. And that is fine! It’s no indictment of a post’s author, when that turns out to be the case!
The upshot of this point and the previous one is that in (what I take to be) a healthy discussion environment, when someone writes a comment under your post that just says, for instance, “What are some examples of this?”, there is no good reason why that should contribute to any “relational” difficulties. It is the sort of thing that helps to make posts useful, not just to the commentariat as a whole but also to those posts’ authors; and the site is better if people regularly make such comments, ask such questions, pose such criticisms.
And, thus: third, if someone finds that they react to such engagement as if it were some sort of attack, annoyance, problem, etc., that is a bug, and one which they should want to fix. Reacting to a good thing as if it were a bad thing is, quite simply, a mistake.
Note, again, that the question isn’t whether some particular comment is “locally valid” in an “atomic” sense while being problematic in a “relational” sense. The question, rather, is whether the comment is simply good (in a “relational” sense or in any other sense), but is being mistakenly reacted to as though it were bad.
Thank you for laying out your reasoning.
I don’t have any strong objections to any of this (various minor ones, but that’s to be expected)…
… except the last paragraph (#5, starting with “I think Said is trying to figure out …”). There I think you importantly mis-characterize my views; or, to be more precise, you leave out a major aspect, which (in addition to being a missing key point), by its absence colors the rest of your characterization. (What is there is not wrong, per se, but, again, the missing aspect makes it importantly misleading.)
I would, of course, normally elaborate here, but I hesitate to end up with this comment thread/section being filled with my comments. Let me know if you want me to give my thoughts on this in detail here, or elsewhere.
(EDIT: Now expanded upon in this comment.)
I would appreciate more color on your views; by that point I was veering into speculation and hesitant to go too much further, which naturally leads to incompleteness.
By the way, I will note that I am both quite surprised and, separately, something like dismayed, at how devastatingly effective has been what I will characterize as “Said’s privileging-the-hypothesis gambit.”
Like, Said proposed, essentially, “Duncan holds a position which basically no sane person would advocate, and he has somehow held this position for years without anyone noticing, and he conspicuously left this position out of his very-in-depth statement of his beliefs about discourse norms just a couple of months ago”
and if I had realized that I actually needed to seriously counter this claim, I might have started with “bro do you even Bayes?”
(Surely a reasonable prior on someone holding such a position is very very very low even before taking into account the latter parts of the conjunction.)
Like, that Vaniver would go so far as to take the hypothesis
and then go sifting through the past few comments with an eye toward using them to distinguish between “true” and “false” is startling to me.
The observation “Duncan groused at Said for doing too little interpretive and intellectual labor relative to that which he solicited from others” is not adequate support for “Duncan generally thinks that asking for examples is unacceptable.” This is what I meant by the strength of the phrase “blatant falsehood.” I suppose if you are starting from “either Mortimer Snodgrass did it, or not,” rather than from “I wonder who did the murder,” then you can squint at my previous comments—
(including the one that was satirical, which satire, I infer from Vaniver pinging me about my beliefs on that particular phrase offline, was missed)
—and see in them that the murderer has dark hair, and conclude from Mortimer’s dark hair that there should be a large update toward his guilt.
But I rather thought we didn’t do that around here, and did not expect anyone besides Said to seriously entertain the hypothesis, which is ludicrous.
(I get that Said probably genuinely believed it, but the devout genuinely believe in their gods and we don’t give them points for that around here.)
Again, just chiming in, leaving the actual decision up to Ray:
My current take here is indeed that Said’s hypothesis, taken fully literally and within your frame, was quite confused and bad.
But also, like, people’s frames, especially in the domain of adversarial actions, differ hugely, and I’ve in the past been surprised by the degree to which some people’s frames, despite seeming insane and gaslighty to me at first, turned out to be quite valuable. Most concretely, I have in my internal monologue indeed basically fully shifted towards using “lying” and “deception” the way Zack, Benquo, and Jessica use them, because their concept seems to carve reality at its joints much better than my previous concept of lying and deception. This despite me telling them many times that their usage of those terms is quite adversarial and gaslighty.
My current model is that when Said was talking about the preference he ascribes to you, there is a bunch of miscommunication going on, and I probably also have deep disagreements with his underlying model, but I have updated against trying to stamp down on that kind of stuff super hard, even if it sounds quite adversarial to me on first glance.
This might be crazy, and maybe making this a moderation policy would give rise to all kinds of accusations being thrown around and a ton of goodwill being destroyed, but I currently feel more excited about exploring different people’s accusations of adversarialness in a bunch of depth, even if they seem unlikely on the face of it. This is definitely also partially driven by my thoughts on FTX, and by trying to create a space where more uncharitable/adversarial accusations could somehow have been brought up.
But this is really all very off-the-cuff and I have thought about this specific situation and the relevant thread much less than Ray and Ruby have, so I am currently leaving the detailed decisions up to them. But seemed potentially useful to give some of my models here.
I think you are mistaken about the process that generated my previous comment; I would have preferred a response that engaged more with what I wrote.
In particular, it looks to me like you think the core questions are “is the hypothesis I quote correct? Is it backed up by the four examples?”, and the parent comment looks to me like you wrote it thinking I thought the hypothesis you quote is correct and backed up by the examples. I think my grandparent comment makes clear that I think the hypothesis you quote is not correct and is not backed up by the four examples.
Why does the comment not just say “Duncan is straightforwardly right”? Well, I think we disagree about what the core questions are. If you are interested in engaging with that disagreement, so am I; I don’t think it looks like your previous comment.
(I intended to convey with “by the way” that I did not think I had (yet) responded to the full substance of your comment/that I was doing something of an aside.)
I plan to just leave/not post essays here anymore if this isn’t fixed. LW is a miserable place to be, right now. ¯\_(ツ)_/¯
(I also said the following in a chat with several of the moderators on 4/8: “I spent some time wondering if I would endorse a LW where both Duncan and Said were banned, and my conclusion was ‘yes, b/c that place sounds like it knows what it’s for and is pruning and weeding accordingly.’”)
I note that this is leaving out recent and relevant background mentioned in this comment.
I don’t keep track of people’s posting styles and correlate them with their names very well. For most people who post on LW, even if they do it a lot, I have negligible associations beyond “that person sounds vaguely familiar” or “are they [other person], or am I mixing them up?”.
I have persistent impressions of both Said and Duncan, though.
I am limited in my ability to look up any specific Said comment or things I’ve said elsewhere about him because his name tragically shares a spelling with a common English word, but my model of him is strongly positive. I don’t think I’ve ever read a Said comment and thought it was a waste of time, or personally bothersome to me, or sneaky or pushy or anything.
Meanwhile I find Duncan vaguely fascinating like he is a very weird bug which has not, yet, sprayed me personally with defensive bug juice or bitten me with its weird bug pincers. Normally I watch him from a safe distance and marvel at how high a ratio of “incredibly suspicious and hackle-raising” to “not often literally facially wrong in any identifiable ways” he maintains when he writes things. It’s not against any rules to be incredibly suspicious and hackle-raising in a public place, of course, it just means that I don’t invite him to where I’m at. But if he’s coming into conflict with, not just Said, but Said’s presence on LW, I fear I must venture closer to the weird bug.
I’m a big believer in social incompatibility. Some people just don’t click! It’s probably not inherently impossible to navigate but it’s almost never worth the trouble. Duncan shouldn’t have to interact with Said if he doesn’t want to.
Also, being the kind of person who has any social conflicts like that, let alone someone as prone as Duncan is, to my mind fundamentally disqualifies them from claiming to be objective, taking on public-facing moderator-like roles, etc. I myself am not qualified for these roles! I run a walled garden Discord server that only has people I am chill with and don’t pretend to be fair about it. But I also don’t write LW posts about how people I don’t like are unsuited for polite society. I support the notion of simply not allowing authoritative posturing about norms like Duncan often does on LW.
I don’t know[1] for sure what purpose this analogy is serving in this comment, and without it the comment would have felt much less like it was trying to hijack me into associating Duncan with something viscerally unpleasant.
My guess is that it’s meant to convey something like your internal emotional experience, with regards to Duncan, to readers.
I think weird bugs are neat.
I wasn’t sure if I should include the analogy. I came up with it weeks ago when I was remarking to people in my server about how suspicious I find things Duncan writes, and it was popular there; I guess people here are less universally delighted by metaphors about weird bugs than people on my server, whoops! For what it’s worth I think the world is enriched by the presence of weird bugs. The other day someone remarked that they’d found a weird caterpillar on the sidewalk near my house and half my dinner guests got up to go look at it and I almost did myself. I just don’t want to touch weird bugs, and am nervous in a similar way about making it publicly knowable that I have an opinion about Duncan.
I’ve tried for a bit to produce a useful response to the top-level comment and mostly failed, but I did want to note that
“Oh, it sort of didn’t occur to me that this analogy might’ve carried a negative connotation, because when I was negatively gossiping about Duncan behind his back with a bunch of other people who also have an overall negative opinion of him, the analogy was popular!”
is a hell of a take. =/
Oh, no, it’s absolutely negative. I don’t like you. I just don’t specifically think that you are disgusting, and it’s that bit of the reaction to the analogy that caught me by surprise.
“Oh, I’m going to impute malice with the phrase ‘gossiping behind my back’ about someone I have never personally interacted with before who talked about my public blog posts with her friends, when she’s specifically remarked that she’s worried about fallout from letting me know that she doesn’t care for me!” is also kind of a take, and a pretty good example of why I don’t like you. I retract the tentative positive update I made when your only reaction to my comment had been radio silence; I’d found that really encouraging wrt it being safe to have opinions about you where you might see them, but no longer.
It is only safe for you to have opinions if the other people don’t dislike them?
I think you’re trying to set up a really mean dynamic where you get to say mean things about me in public, but if I point out anything frowny about that fact you’re like “ah, see, I knew that guy was Bad; he’s making it Unsafe for me to say rude stuff about him in the public square.”
(Where “Unsafe” means, apparently, “he’ll respond with any kind of objection at all.” Apparently the only dynamic you found acceptable was “I say mean stuff and Duncan just takes it.”)
*shrug
I won’t respond further, since you clearly don’t want a big back-and-forth, but calling people a weird bug and then pretending that doesn’t in practice connote disgust is a motte and bailey.
I kind of doubt you care at all, but here for interested bystanders is more information on my stance.
I suspect you of brigading-type behavior wrt conflicts you get into. Even if you make out like it’s a “get out the vote” campaign, where the fact that rides to the polls don’t require avowing that you’re a Demoblican is important to your reception, when you’re the sort who’ll tell all your friends someone is being mean to you and the karma then swings around wildly, I make some updates. This social power with your clique of admirers, in combination with the contagious lens on the world that they pick up from you, is what unnerves me.
I experience a lot of your word choices (e.g. “gossiping behind [your] back”) as squirrelly[1], manipulative, and more rhetoric than content. I would not have had this experience in this particular case if, for example, you’d said “criticizing [me] to an unsympathetic audience”. Gossip behind one’s back is a social move for a social relationship. One doesn’t clutch one’s pearls about random people gossiping about Kim Kardashian behind her back. We have never met. I’d stand a better chance of recognizing Ms. Kardashian in the grocery store than you. I have met some people who know some people who you hang out with, but it’s disingenuous to suggest that I had any affordances to instead gossip to your face, or that it’s mean to dislike your public blog posts and then talk about disliking them with my friends[2].
Further, it’s rhetorically interesting that you said “Apparently the only dynamic you found acceptable was “I say mean stuff and Duncan just takes it.”″ You didn’t try a lot of different dynamics! I said I was favorably impressed when you didn’t respond. If someone is nervous about you, holding very still and not making any hostile moves is a great way to help them feel safe, and when you tried that (or… looked like you were trying it) it worked. The only other thing you tried was, uh, this, which, as I’m explaining here, I do not find impressive. However, scientists have discovered that there are often more than two possible approaches to social conflict. You could have tried something else! Maybe you could have dug up a mutual friend who’d mediate, or asked a neutral curious question about whether there was something I could point to that would help you understand why you were coming off badly, instead of unloading a dump truck of sneaky nasty connotations on my lap. Maybe you believe every one of those connotations in your heart of hearts. This does not imbue your words with magic soothing power, any more than my intentions successfully accompanied my analogy about weird bugs. You still seem sneaky and nasty to me.
I maintain that I sincerely like squirrels; I am using a colloquial definition which, of definitions I found on the internet, most closely matches the Urban Dictionary cluster.
The “I talk about things with my friends, you brigade” conjugation is not lost on me but I wish to point out in my defense that, as I said in my original comment, I did not intend to touch this situation where it could possibly affect you until it seemed like it was also affecting Said, of whom I am fond.
Positive reinforcement for disengaging!
It doesn’t seem like too many people had a reaction similar to mine, so I don’t know that you were especially miscalibrated. (On reflection, I think the “bug” part is maybe only half of what I found disagreeable about the analogy. Not sure this is worth the derailment.)
For what it’s worth, I had a very similar reaction to yours. Insects and arthropods are a common source of disgust and revulsion, and so comparing anyone to an insect or an arthropod, to me, shows that you’re trying to indicate that this person is either disgusting or repulsive.
I’m sorry! I’m sincerely not trying to indicate that. Duncan fascinates and unnerves me but he does not revolt me. I think the reason “weird bug” made sense to my metaphor generator, instead of “weird plant” or “weird bird” or something, is that bugs have extremely widely varying danger levels—an unfamiliar bug may have all kinds of surprises in the mobility, chemical weapons, aggressiveness, etc. department, whereas plants reliably don’t jump on you and birds are basically all just WYSIWYG; but many weird bugs are completely harmless, and I simply do not know what will happen to me if I poke Duncan.
What about “weird frog”? Frogs don’t have the same negative connotations as bugs and they have the same wide range of danger levels.
I think most poisonous frogs look it and would accordingly pick up a frog that wasn’t very brightly colored if I otherwise wanted to pick up this frog, whereas bugs may look drab while being dangerous.
Poisonous frogs often have bright colors to say “hey don’t eat me”, but there are also ones that use a “if you don’t notice me you won’t eat me” strategy. Ex: cane toad, pickerel frog, black-legged poison dart frog.
Welp, guess I shouldn’t pick up frogs. Not what I expected to be the main takeaway from this thread but still good to know.
Don’t pick up amphibians, or anything else with soft porous skin, in general, unless you’re sure.
...why do they bother being poisonous then tho?
I believe it: https://slatestarcodex.com/2017/10/02/different-worlds/
I liked the analogy and I also like weird bugs
Yup, I strongly agree with this.
And it seems to me that the effort spent moderating this is mostly going to be consequential for Duncan and Said’s future interactions instead of generalizing and being consequential to the interactions between other people on LessWrong, because these sorts of conflicts seem to be quite infrequent. If so, it doesn’t seem worth spending too much time on.
Maybe as a path forward, Duncan and Said can agree to keep exchanges to a maximum of 10 total comments and subsequently move the conversation to a private DM, see if that works, and if it doesn’t re-evaluate from there?
I have not read all the words in this comment section, let alone in all the linked posts, let alone in their comments sections, but/and—it seems to me like there’s something wrong with a process that generates SO MANY WORDS from SO MANY PEOPLE and takes up SO MUCH PERSON-TIME for what is essentially two people not getting along. I get that an individual social conflict can be a microcosm of important broader dynamics, and I suspect that Duncan and/or Said might find my “not getting along” summary trivializing, which may even be true, as noted I haven’t read all the words—just, still, is this really the best thing for everyone involved to be doing with their time?
It is already happening, so the choices are either one big thread, or a dozen (or so) smaller ones.
Or at least, if there’s something so compelling-in-some-way going on for some people that they want to keep engaging, at least we could hope that somehow they could be facilitated in doing mental work that will be helpful for whatever broader things there are. Like, if it’s a microcosm of stuff, if it represents some important trends, if there’s something important but hard to see without trying really hard, then it might be good for them to focus on that rather than being in a fight. (Of course, easier said than done(can); a lot of the ink spilled will feel like trying to touch on the broader things, but only some of it actually will.)
This seems like a situation that is likely to end up ballooning into something that takes up a lot of time and energy. So then, it seems worth deciding on an “appetite” up front. Is this worth an additional two hours of time? Six? Sixty? Deciding on that now will help avoid a scenario where (significantly) more time is spent than is desirable.
Here is some information about my relationship with posting essays and comments to LessWrong. I originally wrote it for a different context (in response to a discussion about how many people avoid LW because the comments are too nitpicky/counterproductive) so it’s not engaging directly with anything in the OP, but @Raemon mentioned it would be useful to have here.
*
I *do* post on LW, but in a very different way than I think I would ideally. For example, I can imagine a world where I post my thoughts piecemeal pretty much as I have them, where I have a research agenda or a sequence in mind and I post each piece *as* I write it, in the hope that engagement with my writing will inform what I think, do, and write next. Instead, I do a year’s worth of work (or more), make a 10-essay sequence, send it through many rounds of editing, and only begin publishing any part of it when I’m completely done, having decided in advance to mostly ignore the comments.
It appears to me that what I write is strongly in line with the vision of LW (as I understand it; my understanding is more an extrapolation of Eliezer’s founding essays and the name of the site than a reflection of discussion with current mods), but I think it is not in line with the actual culture of LW as it exists. A whole bunch of me does not want to post to LW at all and would rather find a different audience for my work, one where I feel comfortable and excited and surrounded by creative peers who are jamming with each other and building things together or something. But I don’t know of any such place that meets my standards in all the important ways, and LW seems like the place where my contributions are most likely to gradually drag the culture in a direction where I’ll actually *enjoy* posting there, instead of feeling like I’m doing a scary unpleasant diligence thing. (Plus I really believe in the site’s underlying vision!)
Sometimes people do say cool interesting valuable-to-me things under my posts. But it’s pretty rare, and I’m always surprised when this happens. Mostly my posts get not much engagement, and the engagement they do get feels a whole lot to me like people attempting to use my post as an opportunity to score points in one way or another, often by (apparently) trying to demonstrate that they’re ahead of me in some way while also accidentally demonstrating that they have probably not even tried to hear me.
My perception is very likely skewed here, but my impression is that the median comment on LW is along the lines of “This is wrong/implausible/inadequate because X.” The comments I *want* are more like, “When I thought about/tried this for five minutes, here is what happened, and here is how I’m thinking about that, and I wonder about x, y, and z.”
Here is a comment thread that demonstrates what it looks like when *I* think that an interesting-to-me post is inadequate/not quite right. I’m not saying commenters in general should be held to this ridiculous standard, I’m just saying, “Here’s a shining example of the kind of thing that is possible, and I really want the world to move in this direction, especially in response to my posts”, or something. (However apparently it wasn’t considered particularly valuable commentary by readers *shrug*.)
Raymond has been trying to get me to post my noticing stuff from Agenty Duck to LW for *years*, or even to let *him* cross post it for me. And I keep saying “no” or “not yet”, because the personal consequences I imagine for me are mostly bad, and I just think I need to make something good enough to outweigh that first. It’s just now, after literally five to ten years of further development, that I’ve gotten that material into a shape where I think the benefit to the world and my local social spaces (and also my bank account) outweighs the personal unpleasantness of posting the stuff to LW.
(This is just one way of looking at it. The full story is a lot bigger and more complicated, I think.)
I also have the sense that most posts don’t get enough / any high-quality engagement, and my bar for such engagement is likely lower than yours.
I suspect though that the main culprit here is not the site culture, but instead a bunch of related reasons: the sheer amount of words on the site and in each essay, which cause the readership to spread out over a gigantic corpus of work; standard Internet engagement patterns (only a small fraction of readers write comments, and only a small fraction of those are high-quality); median LW essays receive too few views to produce healthy discussions; high-average-quality commenters are rare on the Internet, and their comments are spread out over everything they read; imperfect karma incentives; etc.
Are there ways for individuals to reliably get a number of comments sufficiently large to produce the occasional high-quality engagement? The only ways I’ve seen are for them to either already be famous essayists (e.g. the comments sections on ACX or Slow Boring are sufficiently big to contain the occasional gem), or to post in their own Facebook community or something. Feed-like sites like Facebook suffer from their recency bias, however, which is kind of antithetical to the goal of writing truth-seeking and timeless essays.
Strong agree. Though, on an uncharitable view of my own behavior, I also engage in the commenting behavior being described.
One can dream of some genius cracking the filtering problem and creating a criss-crossing tesseract of subcultures that can occupy the same space (e.g. LW) but go off in their own shared-goals directions (those people who jam and analyze with each other; those people who carefully nitpick and verify; those people who gather facts; those people who just vibe; …).
Skimmed all the comments here and wanted to throw in my 2c (while also being unlikely to substantively engage further, take that into account if you’re thinking about responding):
It seems to me that people should spend less time litigating this particular fight and more time figuring out the net effects that Duncan and Said have on LW overall. It seems like mods may be dramatically underrating the value of their time and/or being way too procedurally careful here, and I would like to express that I’d support them saying stuff like “idk exactly what went wrong but you are causing many people on our site (including mods) to have an unproductive time, that’s plenty of grounds for a ban”.
It seems to me that many (probably most) people who engage with Said will end up having an unproductive and unpleasant time. So then my brain started generating solutions like “what if you added a flair to his comments saying ‘often unproductive to engage’” and then I was like “wait this is clearly a missing stair situation (in terms of the structural features not the severity of the misbehavior) and people are in general way too slow to act on those; at the point where this seems like a plausibly-net-positive intervention he should clearly just be banned”.
It seems to me that Duncan has very strong emotional reactions about which norms are used, and how they’re used, and that his preferred norms seem pretty bizarre to many people (I relate to several of Alicorn’s reactions to him, including “marvel at how high a ratio of ‘incredibly suspicious and hackle-raising’ to ‘not often literally facially wrong in any identifiable ways’”) and again the solution my brain generated was to have some kind of flair like ‘often dies on the hill of unusual discourse norms’ (this is a low-effort phrasing that’s directionally correct but there’s probably a much better one) and then I was like “wait this is another missing stair situation”. But it feels like there’s plausibly an 80/20 solution here where Duncan can still post his posts (with some kind of “see my profile for a disclaimer about discourse norms” header) but not comment on other people’s.
I say all this despite agreeing with Said’s pessimism about the quality of most LW content. I just don’t think there’s any realistic world in which commenting pessimistically on lots of stuff in the way that Said does actually helps with that, but it does hurt the few good things. Wei Dai had a comment below about how important it is to know whether there’s any criticism or not, but mostly I don’t care about this either because my prior is just that it’s bad whether or not there’s criticism. In other words, I think the only good approach here is to focus on farming the rare good stuff and ignoring the bad stuff (except for the stuff that ends up way overrated, like (IMO) Babble or Simulators, which I think should be called out directly).
But how do you find the rare good stuff amidst all the bad stuff? I tend to do it with a combination of looking at karma, checking the comments to see whether or not there’s good criticism, and finally reading it myself if it passes the previous two filters. But if a potentially good criticism was banned or disincentivized, then that 1) causes me to waste time (since it distorts both signals I rely on), and 2) potentially causes me to incorrectly judge the post as “good” because I fail to notice the flaw myself. So what do you do such that it doesn’t matter whether or not there’s criticism?
My approach is to read the title, then if I like it read the first paragraph, then if I like that skim the post, then in rare cases read the post in full (all informed by karma).
I can’t usually evaluate the quality of criticism without at least having skimmed the post. And once I’ve done that then I don’t usually gain much from the criticisms (although I do agree they’re sometimes useful).
I’m partly informed here by the fact that I tend to find Said’s criticisms unusually non-useful.
Thanks for weighing in! Fwiw I’ve been skimming but not particularly focused on the litigation of the current dispute, and instead focusing on broader patterns. (I think some amount of litigation of the object level was worth doing but we’re past the point where I expect marginal efforts there to help)
One of the things that’s most cruxy to me is what people who contribute a lot of top content* feel about the broader patterns, so, I appreciate you chiming in here.
*roughly operationalized as “write stuff that ends up in the top 20 or top 50 of the annual review”
Makes sense.
FYI I personally haven’t had bad experiences with Said (and in fact I remember talking to mods who were at one point surprised by how positively he engaged with some of my posts). My main concern here is the missing stair dynamic of “predictable problem that newcomers will face”.
You know, I’ve seen this sort of characterization of my commenting activity quite a few times in these discussions, and I’ve mostly shrugged it off; but (with apologies, as I don’t mean to single you out, and indeed you’re one of the LW members whom I respect significantly more than average) I think at this point I have to take the time to address it.
My objection is simply this:
Is it actually true that I “comment pessimistically on lots of stuff”? Do I do this more than other people?
There are many ways of operationalizing that, of course. Here’s one that seems reasonable to me: let’s find all the posts (not counting “meta”-type posts that are already about me, or referring to me, or having to do with moderation norms that affect me, etc.) on which I’ve commented “pessimistically” in, let’s say, the last six months, and see if my comments are, in their level of “pessimism”, distinguishable from those of other commenters there; and also what the results of those comments turn out to be.
#1: https://www.lesswrong.com/posts/Hsix7D2rHyumLAAys/run-posts-by-orgs
Multiple people commenting in similarly “pessimistic” ways, including me. The most, shall we say, vigorous discussion that takes place there doesn’t involve me at all.
#2: https://www.lesswrong.com/posts/2yWnNxEPuLnujxKiW/tabooing-frame-control
My overall view is certainly critical, but here I write multiple medium-length comments, which contain substantive analyses of the concept being discussed. (There is, ho