What I see as under threat is the ability to say in a way that’s actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction.
It feels important to me to protect this as well. I haven’t thought about this topic in depth, though; I might fall into the camp of those who think there are “equivalent” tones which are better but which you think are not equivalent. It’s hard to say in the abstract.
Interestingly the readiest example I have at hand comes from Zack Davis. Over email, he suggested four sample edits to Drowning Children are Rare, claiming that this would say approximately the same thing with a much gentler tone. He suggested changing this:
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell’s top charities; they were worried that this would be an unfair way to save lives. Either scenario clearly implies that these estimates are severely distorted [...]
Either charities like the Gates Foundation and Good Ventures are accumulating funds that could be used to prevent millions of deaths, or the low cost-per-life-saved numbers are significantly overestimated. My former employer GiveWell in particular is notable here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell’s top charities; they were worried about “crowding out” other donors. Either scenario clearly implies that these estimates are systematically mistaken [...]
Some of these changes seemed fine to me (and subsequent edits reflect this), but one of them really does leave out quite a lot, and that kind of suggestion seems pretty typical of the kind of pressure I’m perceiving. I wonder if you can tell which one I mean and how you’d characterize the difference. If not, I’m happy to try explaining, but I figure I should at least check whether the inferential gap here is smaller than I thought.
Registering a prediction that your objection was to shifting “problem here” to “notable here”.
This example is really helpful for understanding your concerns, thanks.
I agree, those are meaningfully and significantly different. Let’s see if I’m perceiving what you are:
Disclaimer: I didn’t read Drowning Children, but I do know of your concern that EA is self-recommending/organizations collecting resources and doing nothing with it.
1. Hoarding means holding on to something for no other reason than to hold on to it. To say they are hoarding money is to say they’re holding onto the money just because they want to control it.
Accumulating funds could be innocuous. It makes me think of someone “saving up” before a planned expenditure, e.g. CFAR “accumulated funds” so they could buy a venue.
2. Exaggeration, to me, means misstating the degree of something for personal benefit, and in this context it definitely connotes an intentional/motivated misstatement.
Overestimated is a plausibly honest mistake one could make. Estimating can be hard—of course it’s suspicious when someone’s estimates systematically deviate in a direction advantageous to them.
3. Notable vs problem here. Since you’re talking about an overall problem, each version still implies that you think GiveWell is guilty of the behavior described, one just being slightly more direct.
4. Unfair way to save lives. If this is really what they said, then that’s an outlandish statement. If people are dying, why are you worried about fairness?? What does it even mean for a way of saving lives to be fair or unfair? Unfair to whom? The people dying?
“Crowding out” other donors: I’m not sure what this means exactly. If the end result is that those charities get fully funded, the people get saved, and maybe even more money gets donated than otherwise, that could plausibly be a good reason.
5. Severely distorted vs systematically mistaken. The first has more overt connotations of an agent intentionally changing the estimates, while something could be systematic without being so deliberate. This is similar to exaggerate vs overestimate. They feel almost, though not quite, equivalent in terms of the accusation made.
In terms of how significantly they alter the meaning, I think the order goes 1, 4, 2, 5, 3.
I’m guessing the one you’re most concerned about is 1, but maybe 4.
You got it right, 1 was the suggested change I was most disappointed by, as the weakening of rhetorical force also took away a substantive claim that I actually meant to be making: that GiveWell wasn’t actually doing utilitarian-consequentialist reasoning about opportunity cost, but was instead displaying a sort of stereotyped accumulation behavior. (I began Effective Altruism is Self-Recommending with a cute story about a toddler to try to gesture at a similar “this is stereotyped behavior, not necessarily a clever conscious scheme,” but made other choices that interfered with that.)
4 turned out to be important too, since (as I later added a quote and link referencing) “unfairness” literally was a stated motivation for GiveWell—but Zack didn’t know that at the time, and the draft didn’t make that clear, so it was reasonable to suggest the change.
The other changes basically seemed innocuous.
For the record, I would advocate against any enforced norm that says you couldn’t use the first version. I would argue against anyone who thought you shouldn’t be able to say either of those in some public form generally or on LessWrong specifically. I would update negatively on GiveWell or anyone else who tried to complain about your statements because you used the first version and not something like the second.
I understand the fear, if not terror, at the idea of someone claiming you shouldn’t be able to do that. I expect I’d feel some of that myself if I thought someone was advocating it.
I can also see how some of my statements, especially in conversation with Zack, might have implied I had a different position here. I believe I do have an underlying coherent and consistent frame which discriminates between the cases and explains my different reactions, but I suspect it will take time and care to convey successfully.
I do think we (you + others in this convo + me) have some real disagreements though. I can say that I want to defend your ability to make those statements, but you might see some of my other positions as more dangerous to your ability to speak here than you think I realize/acknowledge. That could be, and I want to understand why things I’m worried about might be outweighed by these other things.
I think my basic worry is that if there’s not an active culture-setting drive against concern-trolling, then participating on this site will mean constant social pressure against this sort of thing. That means that if I try to do things like empathize with likely readers, take into account feedback, etc., I’ll either gradually become less clear in the direction this kind of concern trolling wants, or oppositionally pick fights to counteract that, or stop paying attention to LessWrong, or put up crude mental defenses that make me a little more obtuse in the direction of Said. Or some combination of those. I don’t think any of those are great options.
No one here since Eliezer seems to have had both the social power and the willingness to impose new—not quite standards, but social incentive gradients. The mod team has the power, I think.
Thanks for clarifying that you’re firmly in favor of at least tolerating this kind of speech. That is somewhat reassuring. But the culture is also determined by which things the mods are willing to ban for being wrong for the culture, and by the implicit, connotative messages in the way you talk about conflicts as they come up. The generator of this kind of behavior is what I’m trying to have an argument with, as it seems to me to be basically embracing the drift towards pressure against the kind of clarity-creation that creates discomfort for people connected to conventional power and money. I recognize that asking for an active push in the other direction is a difficult request, but LessWrong’s mission is a difficult mission!
^ acknowledged, though I am curious what specific behaviors you have in mind by concern-trolling and whether you can point to any examples on LessWrong.
Reflecting on the conversations in this thread, I’m thinking/remembering that my attention and your (plus others’) attention were on different things: if I’m understanding correctly, most of your attention has been on discussions with a political element (money and power), yet I have been focused on what are, in my mind, apolitical discussions which have little to do with money or power.
I would venture (though I am not sure) that the norms and moderation requirements/desiderata for those contexts are different and can be dealt with differently. That is, when someone makes a fact post about exercise or productivity, or someone writes about something to do with their personal psychology, or even someone is conjecturing about society in general, these cases are all very different from when bad behavior is being pointed out, e.g. in Drowning Children.
I haven’t thought much about the latter case; it feels like such posts, while important, are an extreme minority on LessWrong. One in a hundred. The other ninety-nine are not very political at all, unless raw AI safety technical stuff is actually political. I feel much less concerned that there are social pressures pushing to censor views on those topics. I am more concerned that people overall have productive conversations they find on net enjoyable and worthwhile, and this leads me to want to state that it is, all else equal, virtuous to be more “pleasant and considerate” in one’s discussions; and all else equal, one ought to invest in keeping the tone of discussions collaborative/cooperative/not-at-war, etc.
And the question is whether I can actually think about these putatively “apolitical” discussions separately from discussions of more political significance. Maybe whatever norms/virtues we set in the former will dictate how conversations about the latter are allowed to proceed, and we have to think about the policies for all types of discussions all at once. I could imagine that being true, though it’s not clear to me that it definitely is.
I’m curious what you think.
At one point in the thread you said I’d missed the most important case; I think that was relative to your focus.