In ages past, vitriol like this would be downvoted into oblivion. This was out of recognition that norms of good discourse are more important than the content of arguments. Failure to abide by this spreads rot and makes good communal epistemic hygiene even more difficult.
I notice downvoting is disabled now. Which, sadly, means that people will be tempted to engage with this. Which reinforces a norm of having one’s dissent noticed by acting like an unapologetic asshole. Which burns the future of this garden.
So as a close second, I advise just thoroughly ignoring 18239018038528017428 unless and until they step up to meet more noble conversational norms. If there are good points to be made here, they should be converted into the truth-seeking style Less Wrong aspires to so that we can all engage with them in a more hygienic way.
I appreciate Duncan’s attempts to do that conversion and speak to the converted form of the argument.
But unless and until I see enough evidence to convince me otherwise, I assume 18239018038528017428′s intentions are not truth-seeking. I assume they are inflammatory and will not change via civil discourse.
Ergo, request to all:
Do not feed trolls.
PS: I will follow my own advice here and have no intention of replying to 18239018038528017428 unless and until they transpose their discourse into the key of decency. I expect them to reply to me here, probably with more vitriol and some kind of personal attack and/or attempt to discredit me personally. My ignoring them should be taken as my following my own policy. Note that if 18239018038528017428 does reply with vitriol, it will probably be in some way fashioned as an attempt to make my very refusal to engage look like confirmation of their narrative. Please filter your reading of any replies to my message here accordingly.
I’m the person who advocated most strongly for getting the downvote disabled, and I share some of 18239018038528017428′s skepticism about the community in the Bay Area, but I strongly agree with Val’s comment. There are already a ton of case studies on the internet in how fragile good conversational norms are. I’m going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.
(Also ditto everything Val said about not replying to 18239018038528017428)
I’m going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.
Thanks for that; I had already noticed this thread but a policy of reporting things is often helpful. It seemed like Duncan was handling himself well, and that leaving this up was better than censoring it. It seems easier for people to judge the screed fairly with the author’s original tone, and so just editing out the vitriol seems problematic.
With the new site, we expect to have mod tools that will be helpful here, ranging from downvoting making this invisible-by-default to IP-banning and other measures that make creating a new throwaway account difficult.
For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)
Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter’s opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider—a behavior pattern that doesn’t need more practice, IMHO.
(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)
It’s true that sensitivity norms can have subtle effects on a conversation, but nastiness norms can too. If you look at the study cited in the “hold off on proposing solutions” essay, you can see a case where politicizing a topic restricts the space of ideas that are explored. (I think this is actually a more natural takeaway from the study than “hold off on proposing solutions”.) Nasty conversations also often see evaporative cooling effects where you are eventually just left with hardliners on each side. In general, I think nasty conversations tend to leave any line of reasoning that doesn’t clearly support the position of one side or the other under-explored. (This is a pretty big flaw in my opinion, because I think divided opinions are usually an indicator of genuinely mixed evidence. If the evidence is mixed, the correct hypothesis is probably one that finds a way to reconcile almost all of it.) Furthermore I would predict that arguments in nasty conversations are less creative and generally just less well thought through.
Here’s another argument. Imagine 18239018038528017428 showed you their draft comment minus the very last sentence. Then they showed you the last sentence “The world would be concretely better off if the author, and anyone like him, killed themselves.” Would you tell them to add it in or not? If not, I suspect there’s status quo bias, or something like it, in operation here.
Anyway, I think there are better ways to address the issue you describe than going full vitriol. For example, I once worked at a company that had a culture of employees ribbing each other, and sometimes we would rib each other about things other employees were doing wrong that would have been awkward to bring up in a serious manner. I think that worked pretty well.
In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider—a behavior pattern that doesn’t need more practice, IMHO.
I just want to point out that Duncan did in fact put a tremendous amount of time into engaging with this critic (more time than he put into engaging with any other commenter in this thread, by my estimate).
My other comment should hopefully clarify things, at least with regard to politicization in particular.
To spell out the implications a bit more: the problem with political discourse, the reason it kills minds, is not that it gets heated; rather, it freezes people’s mental categories in ways that prevent them from making ontological updates or paradigm shifts of any kind. In effect, people switch from using physical cognition to think about arguments (modus ponens, etc.), to using social cognition instead (who wins, who loses, etc.). (Most people, of course, never use anything but social cognition in arguments; politics makes even “nerds” or “intellectuals” behave like typical humans.)
It is in fact possible for “heated” or even “nasty” discourse to be very information-rich; this makes sense if you realize that what counts as “nasty” depends on social norms. If you encounter discourse from a different social context (even, for example, simply because the speaker has misunderstood the social context and its norms!) you may read it as “nasty”, despite the fact that the author was specifically intending to communicate content.
Now, of course I don’t consider 18239018038528017428′s comment to be optimally worded—but then, I wouldn’t, because I didn’t write it. This is the important thing to understand: there is value to be had in getting detailed input on the mental states of people unlike oneself.
I agree that Duncan deserves positive reinforcement for engaging with this critic to the extent he did. But I think it was actually good for him epistemically to do so, not just as a demonstration of his willingness-to-bend-over-backwards, and thus, good social nature.
I don’t live in the Bay Area, have no intention of moving there in the near future, and resent the idea that anyone who wants to be part of what ought to be a worldwide rationality community needs to eventually move to the Bay Area to do so. I’m part of the rationality and effective altruism communities, and I too have taken community members in the Bay Area to task for acting as though they can solve community coordination problems with new projects when acknowledgement of the underwhelming success or failure of prior projects never seems to take place. I do that on Facebook, though, where my civilian identity and a track record of my behaviour are on display. There are closed groups or chats where things are less open, so it’s not as damaging, and even if I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it’s out in the open so I may face the full consequences of my mistakes.
I know lots of people mentioned in ’18239018038528017428′s comment. I either didn’t know those things about them, or I wouldn’t characterize what I did know in such terms. Based on their claims, ’18239018038528017428′ seems to have more intimate knowledge than I do, and I’d guess is also in or around the Bay Area rationality community. Yet they’re on this forum anonymously, framing themselves as some underdog taking down high-status community members, when the criteria for such status haven’t been established other than “works at MIRI/CFAR”, and what they’re doing is just insulting and accusing regular people like the rest of us on the internet. They’re not facing the consequences of their actions.
The information provided isn’t primarily intended to resolve disputes, which I would think ought to be the best application of truth-seeking behaviour in this regard, and which is expected as a primary, if not the sole, purpose of discourse here. The primary purposes of ’18239018038528017428′s comment were to express frustration, slander certain individuals, and undermine and discredit Duncan’s project without evidence to back up their claims. These are at cross-purposes with truth-seeking behaviour.
There’s nothing I do that gets policed for tone on the basis of sensitivity that ’18239018038528017428′ isn’t also doing. While we’re talking about norms of sensitivity, let’s talk about norms for resolving interpersonal disputes. All the differences between how I and lots of others in the community do it, even if the tone we use isn’t always splendid or sensitive, and how ’18239018038528017428′ does it, are what separate people who have non-zero respect for norms from those who don’t. This is coming from me, a guy who lots of people think probably already flouts social norms too much.
I am unsympathetic to ’18239018038528017428′ and indifferent to whether they’re censored. Another reason not to resolve interpersonal disputes like this in public on a website like LessWrong is that most people in online communities don’t like seeing this sort of drama dominate discourse, and in particular there are lots of us who don’t care for ever more drama from one zip code being all anyone pays attention to. That defies the purpose of this site, and saps the will of people not in the Bay Area to continue to engage with the rationality community. That’s not what anyone needs. Since we’ve established ’18239018038528017428′ seems close enough to probably be part of the Berkeley rationality community already, there are plenty of channels, like private group chats, mailing lists, or other apps, through which everyone involved could be reached, and where user ‘18239018038528017428’ wouldn’t need to out themselves in front of everyone. They could even have had a friend do it.
There are plenty of ways they could’ve accomplished everything they would’ve wanted without being censored, and without doing it on LessWrong. When they have access to plenty of online spaces which serve the same purpose, there’s no reason LW must allow that speech to the chagrin of all other users. While I get that you think a Chesterton’s fence for discourse is being torn down here, I don’t believe that’s what’s going on, and I think the preferences of everyone else on LessWrong who isn’t personally involved deserve a say in what they are and aren’t okay with being censored on this site.
You don’t seem to be addressing what I said very much if at all, but rather to mostly be giving your reaction to 18239018038528017428′s comments. This is demonstrated by the fact that you take for granted various assumptions that it was the purpose of my comment to call into question.
In particular, the speech is not being allowed “to the chagrin of all other users”. I am notably non-chagrinned by the speech being allowed, and I advocate that people be less chagrinned by such speech being allowed.
Needless to say, to be allowed is not to be approved.
By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like.
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
I notice you make a number of claims, but that of the ones I disagree with, none of them have “crux nature” for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn’t change my stance.
(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I’ll focus on offering you a pathway by which you could convince me.)
But if I dig a bit, I think I see a hint of a possible double crux. You say:
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information.
I agree with a steelman version of this. (I don’t think it is literally entirely distinct — but I also doubt you do, and I don’t want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply “…and that’s bad.” Whereas I would add instead “…and that’s good.”
In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that’s mostly okay. But when working out social dynamics (like, say, whether a person who’s proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.
At which point I cease caring about “efficient transmission of information”, basically because I think (a) the information being sent is secretly laced with social subtext that’ll affect future transmissions as well as its own perceived truthiness, and (b) the “efficient” transmission is emotionally harder to receive.
So to be succinct, I claim that:
(1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
(2) I am persuadable as per (1). It’s a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn’t preserve civility on Less Wrong.
(3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that’s a point where I am persuadable.
I’m gonna address these thoughts as they apply to this situation. Because you’ve publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won’t tell you you should kill yourself).
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
Did he tell people they should kill themselves?
This strikes me as an example of the worst argument in the world. Yes, telling people to kill themselves is an alternative discourse norm, alternative discourse norms can be valuable, but therefore telling people to kill themselves is valuable? Come on. You can easily draw a Venn diagram that refutes this argument. Alternative discourse norms can be achieved while still censoring nastiness.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
Telling forum users they should kill themselves is not gonna increase the willingness of people to post to an online forum. In addition to the intimidation factor, it makes Less Wrong look like more of a standard issue internet shithole.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
This can be a valuable skill and it can still be valuable to censor content-free vitriol.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.
Yes, it takes a lot of effort to avoid telling people that they should kill themselves… Sorry, but I don’t really mind using the ability to keep that sort of thought to yourself as a filter.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
If we remove Chesterton’s Fences related to violence prevention, I predict the results will not be good for truthseeking. Truthseeking tends to arise in violence-free environments.
Maybe it’d be useful for me to clarify my position: I would be in favor of censoring out the nasty parts while maintaining the comment’s information content and probably banning the user who made the comment. This is mainly because I think comments like this create bad second-order effects and people should be punished for making them, not because I want to preserve Duncan’s feelings. I care more about trolls being humiliated than censoring their ideas. If a troll delights in taking people down a notch for its own sake, we look like simps if we don’t defect in return. Ask any schoolteacher: letting bullies run wild sets a bad precedent. Let me put it this way: bullies in the classroom are bad for truthseeking.
See also http://lesswrong.com/lw/5f/bayesians_vs_barbarians/
Your comment makes you come across as someone who has led a very sheltered upper-class existence. Like, I thought I was sheltered, but it clearly gets a lot more extreme. This stuff is not a one-sided tradeoff like you seem to think!
For obvious reasons, it’s much easier to convert a nice website to a nasty one than the other way around. And if you want a rationalist 4chan, we already have that. The potential gains from turning the lesswrong.com domain into another rationalist 4chan seem small, but the potential losses are large.
Because you’ve publicly expressed assent with extreme bluntness
Who said anything about “extreme”?
You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic’s comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the “Berkeley rationalist community”. (The parts about people killing themselves were part of the expression of anger, and need not be read literally.) The substance of that criticism may be false, but it is useful to know that someone in the author’s position (they seemed to have had contact with members of the community) believes it, or is at least sufficiently angry that they would speak as if they believed it.
I will give you a concession: I possibly went too far in saying I was grateful that downvoting was disabled; maybe that comment’s proper place was in “comment score below threshold” minimization-land. But that’s about as far as I think the censorship needs to go.
Not, by the way, that I think it would be catastrophic if the comment were edited—in retrospect, I probably overstated the strength of my preference above—but my preference is, indeed, that it be left for readers to judge the author.
Now, speaking of tone: the tone of the parent comment is inappropriately hostile to me, especially in light of my other comment in which I addressed you in a distinctly non-hostile tone. You said you were curious about what caused me to update—this suggested you were interested in a good-faith intellectual discussion about discourse norms in general, such as would have been an appropriate reply to my comment. Instead, it seems, you were simply preparing an ambush, ready to attack me for (I assume) showing too much sympathy for the enemy, with whatever “ammunition” my comment gave you.
I don’t wish to continue this argument, both because I have other priorities, and also because I don’t wish to be perceived as allying myself in a commenting-faction with the anonymous troublemaker. This is by no means a hill that I am interested in dying on.
However, there is one further remark I must make:
Your comment makes you come across as someone who has led a very sheltered upper-class existence
You are incredibly wrong here, and frankly you ought to know better. (You have data to the contrary.)
Positive reinforcement for noticing your confusion. It does indeed seem that we are working from different models—perhaps even different ontologies—of the situation, informed by different sets of experiences and preoccupations.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
But people don’t choose goals. They only choose various means to bring about the goals that they already have. This applies both to individuals and to communities. And since they do not choose goals at all, they cannot choose goals by the particular method of saying, “from now on our goal is going to be X,” regardless what X is, unless it is already their goal. Thus a community that says, “our goal is truth,” does not automatically have the goal of truth, unless it is already their goal.
Most people certainly care much more about not being attacked physically than discovering truth. And most people also care more about not being rudely insulted than about discovering truth. That applies to people who identify as rationalists nearly as much as to anyone else. So you cannot take at face value the claim that LW is “an internet forum concerned with truth-seeking,” nor is it helpful to talk about what LW is “supposed to be optimizing for.” It is doing what it is actually doing, not necessarily what people say it is doing.
The norm that people should be sensitive about tone is adopted in relation to goals like not being rudely insulted, not in relation to truth. And even John Maxwell’s argument that “Truthseeking tends to arise in violence-free environments” is motivated reasoning: what matters to such people is the absence of violence (including violent words), and the benefits to truth, if there are any, are secondary.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
Is the implication that they’re not reasonable under the assumption that truth, too, trades off against other values?
What the points I presented (perhaps along with other things) convinced me of was not that truth or information takes precedence over all other values, but rather simply that it had been sacrificed too much in service of other values. The pendulum has swung too far in a certain direction.
Above, I made it sound like the overshooting of the target was severe; but I now think that was exaggerated. That quantitative aspect of my comment should probably be regarded as heated rhetoric in service of my point. It’s fairly true in my own case, however, which (you’ll hopefully understand) is particularly salient to me. Speaking up about my preoccupations is (I’ve concluded) something I haven’t done nearly enough of. Hence this very discussion.
But people don’t choose goals.
This is obviously false, as a general statement. People choose goals all the time. They don’t, perhaps, choose their ultimate goals, but I’m not saying that truth-seeking is necessarily anybody’s ultimate goal. It’s just a value that has been underserved by a social context that was ostensibly designed specifically to serve it.
Most people certainly care much more about not being attacked physically than discovering truth.
But not infinitely much. That’s why communicational norms differ among contexts; not all contexts are as tightly regulated as politics, diplomacy, and law. What I’m suggesting is that Less Wrong, an internet forum for discovering truth, can afford to occupy a place toward the looser end of the spectrum of communicational norms.
This, indeed, is possible because a lot of other optimization power has already gone into the prevention of violence; the background society does a lot of this work, and the fact that people are confronting each other remotely over the internet does a fair portion of the rest. And contrary to Maxwell’s implication, nobody is talking about removing any Chesterton Fences. Obviously, for example, actual threats of violence are intolerable. (That did not occur here—though again, I’m much less interested in defending the specific comment originally at issue than in discussing the general principles which, to my mind, this conversation implicates.)
The thing is: not all norms are Chesterton Fences! Most norms are flexible, with fuzzy boundaries that can be shifted in one direction or the other. This includes norms whose purpose is to prevent violence. (Not all norms of diplomacy are entirely unambiguous, let alone ordinary rules of “civil discourse”.) The characteristic of fences is that they’re bright lines, clear demarcations, without any ambiguity as to which side you’re on. And just as surely as they should only be removed with great caution, so too should careful consideration guide their erection in the first place. When possible, the work of norms should be done by ordinary norms, which allow themselves to be adjusted in service of goals.
There are other points to consider, as well, that I haven’t even gotten into. For example, it looks conceivable that, in the future, technology, and the way it interacts with society, will make privacy and secrecy less possible; and that social norms predicated upon their possibility will become less effective at their purposes (which may include everything up to the prevention of outright violence). In such a world, it may be important to develop the ability to build trust by disclosing more information, rather than less.
I agree with all of this. (Except “this is obviously false,” but this is not a real disagreement with what you are saying. When I said people do not choose goals, that was in fact about ultimate goals.)
Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like.
Yeah, but exposure therapy doesn’t work like that. If people are too sensitive, you can’t just rub their faces in the thing they’re sensitive about and expect them to change. In fact, what you’d want in order to desensitize people is the exact opposite—really tight conversation norms that still let people push slightly outside their comfort zone.
Ah, I was using a more colloquial definition of evidence, not a technical one. I misspoke.
What goes through my mind here is, “Trolls spend a lot of time and energy making comments like this one too, and don’t stay silent when they could, so I’m not at all convinced that those points are more consistent with a world where they’re truth-seeking than they are with a world in which they’re just trolling.”
I still think that’s basically true. So to me those points seem irrelevant.
I think what I mean is something more like, “Unless and until I see enough evidence to convince me otherwise….” I’ll go back and edit for that correction.
PSA:
Do not feed trolls.
In ages past, vitriol like this would be downvoted into oblivion. This was out of recognition that norms of good discourse are more important than the content of arguments. Failure to abide by this spreads rot and makes good communal epistemic hygiene even more difficult.
I notice downvoting is disabled now. Which, sadly, means that people will be tempted to engage with this. Which reinforces a norm of having one’s dissent noticed by acting like an unapologetic asshole. Which burns the future of this garden.
So as a close second, I advise just thoroughly ignoring 18239018038528017428 unless and until they step up to meet more noble conversational norms. If there are good points to be made here, they should be converted into the truth-seeking style Less Wrong aspires to so that we can all engage with them in a more hygienic way.
I appreciate Duncan’s attempts to do that conversion and speak to the converted form of the argument.
But unless and until I see enough evidence to convince me otherwise, I assume 18239018038528017428′s intentions are not truth-seeking. I assume they are inflammatory and will not change via civil discourse.
Ergo, request to all:
Do not feed trolls.
PS: I will follow my own advice here and have no intention of replying to 18239018038528017428 unless and until they transpose their discourse into the key of decency. I expect them to reply to me here, probably with more vitriol and some kind of personal attack and/or attempt to discredit me personally. My ignoring them should be taken as my following my own policy. Note that if 18239018038528017428 does reply with vitriol, it will probably be in some way fashioned as an attempt to make my very refusal to engage look like confirmation of their narrative. Please filter your reading of any replies to my message here accordingly.
I’m the person who advocated most strongly for getting the downvote disabled, and I share some of 18239018038528017428′s skepticism about the community in the Bay Area, but I strongly agree with Val’s comment. There are already a ton of case studies on the internet in how fragile good conversational norms are. I’m going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.
(Also ditto everything Val said about not replying to 18239018038528017428)
Thanks for that; I had already noticed this thread but a policy of reporting things is often helpful. It seemed like Duncan was handling himself well, and that leaving this up was better than censoring it. It seems easier for people to judge the screed fairly with the author’s original tone, and so just editing out the vitriol seems problematic.
With the new site, we expect to have mod tools that will be helpful here, ranging from downvoting making comments like this invisible by default to IP bans and other measures that make creating another throwaway account difficult.
For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)
Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter’s opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider—a behavior pattern that doesn’t need more practice, IMHO.
(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)
I’m also curious to hear what made you update.
It’s true that sensitivity norms can have subtle effects on a conversation, but nastiness norms can too. If you look at the study cited in the “hold off on proposing solutions” essay, you can see a case where politicizing a topic restricts the space of ideas that are explored. (I think this is actually a more natural takeaway from the study than “hold off on proposing solutions”.) Nasty conversations also often see evaporative cooling effects where you are eventually just left with hardliners on each side. In general, I think nasty conversations tend to leave any line of reasoning that doesn’t clearly support the position of one side or the other under-explored. (This is a pretty big flaw in my opinion, because I think divided opinions are usually an indicator of genuinely mixed evidence. If the evidence is mixed, the correct hypothesis is probably one that finds a way to reconcile almost all of it.) Furthermore I would predict that arguments in nasty conversations are less creative and generally just less well thought through.
Here’s another argument. Imagine 18239018038528017428 showed you their draft comment minus the very last sentence. Then they showed you the last sentence “The world would be concretely better off if the author, and anyone like him, killed themselves.” Would you tell them to add it in or not? If not, I suspect there’s status quo bias, or something like it, in operation here.
Anyway, I think there are better ways to address the issue you describe than going full vitriol. For example, I once worked at a company that had a culture of employees ribbing each other, and sometimes we would rib each other about things other employees were doing wrong that would have been awkward to bring up in a serious manner. I think that worked pretty well.
I just want to point out that Duncan did in fact put a tremendous amount of time into engaging with this critic (more time than he put into engaging with any other commenter in this thread, by my estimate).
My other comment should hopefully clarify things, as least with regard to politicization in particular.
To spell out the implications a bit more: the problem with political discourse, the reason it kills minds, is not that it gets heated; rather, it freezes people’s mental categories in ways that prevent them from making ontological updates or paradigm shifts of any kind. In effect, people switch from using physical cognition to think about arguments (modus ponens, etc.), to using social cognition instead (who wins, who loses, etc.). (Most people, of course, never use anything but social cognition in arguments; politics makes even “nerds” or “intellectuals” behave like typical humans.)
It is in fact possible for “heated” or even “nasty” discourse to be very information-rich; this makes sense if you realize that what counts as “nasty” depends on social norms. If you encounter discourse from a different social context (even, for example, simply because the speaker has misunderstood the social context and its norms!) you may read it as “nasty”, despite the fact that the author was specifically intending to communicate content.
Now, of course I don’t consider 18239018038528017428′s comment to be optimally worded—but then, I wouldn’t, because I didn’t write it. This is the important thing to understand: there is value to be had in getting detailed input on the mental states of people unlike oneself.
I agree that Duncan deserves positive reinforcement for engaging with this critic to the extent he did. But I think it was actually good for him epistemically to do so, not just as a demonstration of his willingness-to-bend-over-backwards, and thus, good social nature.
I don’t live in the Bay Area, have no intention of moving there in the near future, and resent the idea that anyone who wants to be part of what ought to be a worldwide rationality community must eventually move to the Bay Area to do so. I’m part of the rationality and effective altruism communities, and I too have taken community members in the Bay Area to task for acting as though they can solve community coordination problems with new projects when acknowledgement of the underwhelming success or failure of prior projects never seems to take place. I do that on Facebook, though, where my civilian identity and a track record of my behaviour are on display. There are closed groups or chats where things are less open, so it’s not as damaging; and even if I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it’s out in the open, so I face the full consequences of my mistakes.
I know lots of the people mentioned in ’18239018038528017428′s comment. I either didn’t know those things about them, or I wouldn’t characterize what I did know in such terms. Based on their claims, ’18239018038528017428′ seems to have more intimate knowledge than I do, and I’d guess is also in or around the Bay Area rationality community. Yet they’re on this forum anonymously, framing themselves as some underdog taking down high-status community members, when no criterion for that status has been established other than “works at MIRI/CFAR”, and what they’re actually doing is insulting and accusing regular people like the rest of us on the internet. They’re not facing the consequences of their actions.
The information provided isn’t primarily intended to resolve disputes, which ought to be the best application of truth-seeking behaviour here, and which is a, if not the, primary purpose of discourse on this site. The primary purposes of ’18239018038528017428′s comment were to express frustration, slander certain individuals, and undermine and discredit Duncan’s project without evidence to back up their claims. These are at cross-purposes with truth-seeking.
There’s nothing I do that gets policed for tone, on the basis of sensitivity, that ’18239018038528017428′ isn’t also doing. While we’re talking about norms of sensitivity, let’s talk about norms for resolving interpersonal disputes. The difference between how I and lots of others in the community handle such disputes, even if our tone isn’t always splendid or sensitive, and how ’18239018038528017428′ handles them, is what separates people who have non-zero respect for norms from those who don’t. This coming from me, a guy who lots of people think already flouts social norms too much.
I am unsympathetic to ’18239018038528017428′ and indifferent to whether they’re censored. Another reason not to resolve interpersonal disputes like this in public on a website like LessWrong: most people in online communities don’t like seeing this sort of drama dominate discourse, and in particular lots of us don’t care for ever more drama from one zip code being all anyone pays attention to. That defies the purpose of this site, and saps the will of people outside the Bay Area to keep engaging with the rationality community. That’s not what anyone needs. Since we’ve established that ’18239018038528017428′ is probably already part of the Berkeley rationality community, there are plenty of channels (private group chats, mailing lists, or other apps) where everyone involved could be connected without user ‘18239018038528017428’ needing to out themselves in front of everyone. They could’ve had a friend do it.
There are plenty of ways they could’ve accomplished everything they wanted without being censored, and without doing it on LessWrong. When they have access to plenty of online spaces that serve the same purpose, there’s no reason LW must allow that speech to the chagrin of all other users. While I get that you think a Chesterton’s fence for discourse is being torn down here, I don’t believe that’s what’s going on, and I think everyone else on LessWrong who isn’t personally involved deserves a say in what they are and aren’t okay with being censored on this site.
You don’t seem to be addressing what I said very much if at all, but rather to mostly be giving your reaction to 18239018038528017428′s comments. This is demonstrated by the fact that you take for granted various assumptions that it was the purpose of my comment to call into question.
In particular, the speech is not being allowed “to the chagrin of all other users”. I am notably non-chagrinned by the speech being allowed, and I advocate that people be less chagrinned by such speech being allowed.
Needless to say, to be allowed is not to be approved.
What convinced you of this?
A constellation of related realizations.
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
Cool. Let’s play.
I notice you make a number of claims, but that of the ones I disagree with, none of them have “crux nature” for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn’t change my stance.
(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I’ll focus on offering you a pathway by which you could convince me.)
But if I dig a bit, I think I see a hint of a possible double crux. You say:
I agree with a steelman version of this. (I don’t think it is literally entirely distinct — but I also doubt you do, and I don’t want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply “…and that’s bad.” Whereas I would add instead “…and that’s good.”
In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that’s mostly okay. But when working out social dynamics (like, say, whether a person who’s proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.
At which point I cease caring about “efficient transmission of information”, basically because I think (a) the information being sent is secretly laced with social subtext that’ll affect future transmissions as well as its own perceived truthiness, and (b) the “efficient” transmission is emotionally harder to receive.
So to be succinct, I claim that:
(1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
(2) I am persuadable as per (1). It’s a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn’t preserve civility on Less Wrong.
(3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that’s a point where I am persuadable.
Your turn!
I’m gonna address these thoughts as they apply to this situation. Because you’ve publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won’t tell you you should kill yourself).
Did he tell people they should kill themselves?
This strikes me as an example of the worst argument in the world. Yes, telling people to kill themselves is an alternative discourse norm, alternative discourse norms can be valuable, but therefore telling people to kill themselves is valuable? Come on. You can easily draw a Venn diagram that refutes this argument. Alternative discourse norms can be achieved while still censoring nastiness.
Telling forum users they should kill themselves is not gonna increase the willingness of people to post to an online forum. In addition to the intimidation factor, it makes Less Wrong look like more of a standard issue internet shithole.
This can be a valuable skill and it can still be valuable to censor content-free vitriol.
Yes, it takes a lot of effort to avoid telling people that they should kill themselves… Sorry, but I don’t really mind using the ability to keep that sort of thought to yourself as a filter.
If we remove Chesterton’s Fences related to violence prevention, I predict the results will not be good for truthseeking. Truthseeking tends to arise in violence-free environments.
Maybe it’d be useful for me to clarify my position: I would be in favor of censoring out the nasty parts while maintaining the comment’s information content and probably banning the user who made the comment. This is mainly because I think comments like this create bad second-order effects and people should be punished for making them, not because I want to preserve Duncan’s feelings. I care more about trolls being humiliated than censoring their ideas. If a troll delights in taking people down a notch for its own sake, we look like simps if we don’t defect in return. Ask any schoolteacher: letting bullies run wild sets a bad precedent. Let me put it this way: bullies in the classroom are bad for truthseeking.
See also http://lesswrong.com/lw/5f/bayesians_vs_barbarians/ Your comment makes you come across as someone who has led a very sheltered upper-class existence. Like, I thought I was sheltered but it clearly gets a lot more extreme. This stuff is not a one-sided tradeoff like you seem to think!
For obvious reasons, it’s much easier to convert a nice website into a nasty one than the other way around. And if you want a rationalist 4chan, we already have one. The potential gains from turning the lesswrong.com domain into another rationalist 4chan seem small, but the potential losses are large.
Who said anything about “extreme”?
You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic’s comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the “Berkeley rationalist community”. (The parts about people killing themselves were part of the expression of anger, and need not be read literally.) The substance of that criticism may be false, but it is useful to know that someone in the author’s position (they seemed to have had contact with members of the community) believes it, or is at least sufficiently angry that they would speak as if they believed it.
I will give you a concession: I possibly went too far in saying I was grateful that downvoting was disabled; maybe that comment’s proper place was in “comment score below threshold” minimization-land. But that’s about as far as I think the censorship needs to go.
Not, by the way, that I think it would be catastrophic if the comment were edited—in retrospect, I probably overstated the strength of my preference above—but my preference is, indeed, that it be left for readers to judge the author.
Now, speaking of tone: the tone of the parent comment is inappropriately hostile to me, especially in light of my other comment in which I addressed you in a distinctly non-hostile tone. You said you were curious about what caused me to update—this suggested you were interested in a good-faith intellectual discussion about discourse norms in general, such as would have been an appropriate reply to my comment. Instead, it seems, you were simply preparing an ambush, ready to attack me for (I assume) showing too much sympathy for the enemy, with whatever “ammunition” my comment gave you.
I don’t wish to continue this argument, both because I have other priorities, and also because I don’t wish to be perceived as allying myself in a commenting-faction with the anonymous troublemaker. This is by no means a hill that I am interested in dying on.
However, there is one further remark I must make:
You are incredibly wrong here, and frankly you ought to know better. (You have data to the contrary.)
Well, you’ve left me pretty confused about the level of importance you place on good-faith discussion norms :P
Positive reinforcement for noticing your confusion. It does indeed seem that we are working from different models—perhaps even different ontologies—of the situation, informed by different sets of experiences and preoccupations.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
But people don’t choose goals. They only choose various means to bring about the goals that they already have. This applies both to individuals and to communities. And since they do not choose goals at all, they cannot choose goals by the particular method of saying, “from now on our goal is going to be X,” regardless of what X is, unless it is already their goal. Thus a community that says, “our goal is truth,” does not automatically have the goal of truth, unless that is already its goal.
Most people certainly care much more about not being attacked physically than discovering truth. And most people also care more about not being rudely insulted than about discovering truth. That applies to people who identify as rationalists nearly as much as to anyone else. So you cannot take at face value the claim that LW is “an internet forum concerned with truth-seeking,” nor is it helpful to talk about what LW is “supposed to be optimizing for.” It is doing what it is actually doing, not necessarily what people say it is doing.
That people should be sensitive about tone is taken in relation to goals like not being rudely insulted, not in relation to truth. And even the argument of John Maxwell that “Truthseeking tends to arise in violence-free environments,” is motivated reasoning; what matters for them is the absence of violence (including violent words), and the benefits to truth, if there are any, are secondary.
Is the implication that they’re not reasonable under the assumption that truth, too, trades off against other values?
What the points I presented (perhaps along with other things) convinced me of was not that truth or information takes precedence over all other values, but rather simply that it had been sacrificed too much in service of other values. The pendulum has swung too far in a certain direction.
Above, I made it sound like the overshooting of the target was severe; but I now think this was exaggerated. That quantitative aspect of my comment should probably be regarded as heated rhetoric in service of my point. It’s fairly true in my own case, however, which (you’ll hopefully understand) is particularly salient to me. Speaking up about my preoccupations is (I’ve concluded) something I haven’t done nearly enough of. Hence this very discussion.
This is obviously false, as a general statement. People choose goals all the time. They don’t, perhaps, choose their ultimate goals, but I’m not saying that truth-seeking is necessarily anybody’s ultimate goal. It’s just a value that has been underserved by a social context that was ostensibly designed specifically to serve it.
But not infinitely much. That’s why communicational norms differ among contexts; not all contexts are as tightly regulated as politics, diplomacy, and law. What I’m suggesting is that Less Wrong, an internet forum for discovering truth, can afford to occupy a place toward the looser end of the spectrum of communicational norms.
This, indeed, is possible because a lot of other optimization power has already gone into the prevention of violence; the background society does a lot of this work, and the fact that people are confronting each other remotely over the internet does a fair portion of the rest. And contrary to Maxwell’s implication, nobody is talking about removing any Chesterton Fences. Obviously, for example, actual threats of violence are intolerable. (That did not occur here—though again, I’m much less interested in defending the specific comment originally at issue than in discussing the general principles which, to my mind, this conversation implicates.)
The thing is: not all norms are Chesterton Fences! Most norms are flexible, with fuzzy boundaries that can be shifted in one direction or the other. This includes norms whose purpose is to prevent violence. (Not all norms of diplomacy are entirely unambiguous, let alone ordinary rules of “civil discourse”.) The characteristic of fences is that they’re bright lines, clear demarcations, without any ambiguity as to which side you’re on. And just as surely as they should only be removed with great caution, so too should careful consideration guide their erection in the first place. When possible, the work of norms should be done by ordinary norms, which allow themselves to be adjusted in service of goals.
There are other points to consider, as well, that I haven’t even gotten into. For example, it looks conceivable that, in the future, technology, and the way it interacts with society, will make privacy and secrecy less possible; and that social norms predicated upon their possibility will become less effective at their purposes (which may include everything up to the prevention of outright violence). In such a world, it may be important to develop the ability to build trust by disclosing more information, rather than less.
I agree with all of this. (Except “this is obviously false,” but this is not a real disagreement with what you are saying. When I said people do not choose goals, that was in fact about ultimate goals.)
Yeah, but exposure therapy doesn’t work like that. If people are too sensitive, you can’t just rub their faces in the thing they’re sensitive about and expect them to change. In fact, what you’d want in order to desensitize people is the exact opposite: really tight conversation norms that still let people push slightly outside their comfort zone.
I need access to these studies!
Out of curiosity, why do you prefer having downvotes disabled? (Here’s a comment explaining why I want them back.)
Evidence: time and energy put into the comment. Evidence: not staying silent when they could have.
I am not saying the offending comments are valid; rather, I am curious as to why you discounted what I identify as evidence.
Ah, I was using a more colloquial definition of evidence, not a technical one. I misspoke.
What goes through my mind here is, “Trolls spend a lot of time and energy making comments like this one too, and don’t stay silent when they could, so I’m not at all convinced that those points are more consistent with a world where they’re truth-seeking than they are with a world in which they’re just trolling.”
I still think that’s basically true. So to me those points seem irrelevant.
I think what I mean is something more like, “Unless and until I see enough evidence to convince me otherwise….” I’ll go back and edit for that correction.
In what represents a considerable change of belief on my part, this now strikes me as very probably false.
I’m open. Clarify?
See this comment; most particularly, the final bullet point.
Replied.
I offer this model insofar as it helps with communicating about the puzzle:
http://bearlamp.com.au/a-model-of-arguments/
and this one:
http://bearlamp.com.au/filter-on-the-way-in-filter-on-the-way-out/
see http://lesswrong.com/r/discussion/lw/p23/dragon_army_theory_charter_30min_read/dsyp