Basics of Rationalist Discourse
This post is meant to be a linkable resource. Its core is a short list of guidelines (you can link directly to the list) that are intended to be fairly straightforward and uncontroversial, for the purpose of nurturing and strengthening a culture of clear thinking, clear communication, and collaborative truth-seeking.
“Alas,” said Dumbledore, “we all know that what should be, and what is, are two different things. Thank you for keeping this in mind.”
There is also (for those who want to read more than the simple list) substantial expansion/clarification of each specific guideline, along with justification for the overall philosophy behind the set.
Prelude: On Shorthand
Once someone has a deep, rich understanding of a complex topic, they are often able to refer to that topic with short, simple sentences that correctly convey the intended meaning to other people with similar context and expertise.
However, those same short, simple sentences are often dangerously misleading, in the hands of a novice who lacks the proper background. Dangerous precisely because they seem straightforward and comprehensible, and thus the novice will confidently extrapolate outward from them in what feel like perfectly reasonable ways, unaware the whole time that the concept in their head bears little or no resemblance to the concept that lives in the expert’s head.
Good shorthand in the hands of an experienced user need only be an accurate fit for the already-existing concept it refers to—it doesn’t need the additional property of being an unmistakeable non-fit for other nearby attractors. It doesn’t need to contain complexity or nuance—it just needs to remind the listener of the complexity already contained in their mental model. It’s doing its job if it efficiently evokes the understanding that already exists, independent of itself.
This is important, because what follows this introduction is a list of short, simple sentences comprising the basics of rationalist discourse. Each of those sentences is a solid fit for the more-complicated concept it’s gesturing at, provided you already understand that concept. The short sentences are mnemonics, reminders, hyperlinks.
They are not sufficient, on their own, to reliably cause a beginner to construct the proper concepts from the ground up, and they do not, by themselves, rule out all likely misunderstandings.
All things considered, it seems good to have a clear, concise list near the top of a post like this. People should not have to scroll and scroll and sift through thousands of words when trying to refer back to these guidelines.
But each of the short, simple sentences below admits of multiple interpretations, some of which are intended and others of which are not. They are compressions of complex points, and compressions are inevitably lossy. If a given guideline is new to you, check the in-depth explanation before reposing confidence in your understanding. And if a given guideline stated-in-brief seems to you to be flawed or misguided in some obvious way, check the expansion before spending a bunch of time marshaling objections that may well have already been answered.
Further musing on this concept: Sazen
Guidelines, in brief:
0. Expect good discourse to require energy.
1. Don’t say straightforwardly false things.
2. Track (for yourself) and distinguish (for others) your inferences from your observations.
3. Estimate (for yourself) and make clear (for others) your rough level of confidence in your assertions.
4. Make your claims clear, explicit, and falsifiable, or explicitly acknowledge that you aren’t doing so (or can’t).
5. Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth.
6. Don’t jump to conclusions—maintain at least two hypotheses consistent with the available information.
7. Be careful with extrapolation, interpretation, and summary/restatement—distinguish between what was actually said, and what it sounds like/what it implies/what you think it looks like in practice/what it’s tantamount to. If you believe that a statement A strongly implies B, and you are disagreeing with A because you disagree with B, explicitly note that “A strongly implies B” is a part of your model.
8. Allow people to restate, clarify, retract, and redraft their points, if they say that their first attempt failed to convey their intended meaning; do not hold people to the first or worst version of their claim.
9. Don’t weaponize equivocation/don’t abuse categories/don’t engage in motte-and-bailey shenanigans.
10. Hold yourself to the absolute highest standard when directly modeling or assessing others’ internal states, values, and thought processes.
What does it mean for something to be a “guideline”?
It is a thing that rationalists should try to do, to a substantially greater degree than random humans engaging in run-of-the-mill social interactions. It’s a place where it is usually correct and useful to put forth marginal intentional effort.
It is a domain in which rationalists should be open to requests. If a given comment lacks or is low on one of the virtues described above, and someone else pops in to politely ask for a restatement or a clarification which more clearly expresses that virtue, the first speaker should by default receive that request as a friendly and cooperative act, and respond accordingly (as opposed to receiving it as e.g. onerous, or presumptuous, or as a social attack).
It is an approximation of good rationalist discourse. If a median member of the general population were to practice abiding by it for a month, their thinking would become clearer and their communication would improve. But that doesn’t mean that perfect adherence is sufficient to make discourse good, and it doesn’t mean that breaking it is necessarily bad.
Think of the above, then, as a set of priors. If a guideline says “Do [X],” that is intended to convey that:
There will be better outcomes more frequently from people doing [X] than from people doing [a neutral absence of X], and similarly from [a neutral absence of X] than from [anti-X]. In particular, the difference in outcomes is large enough and reliable enough to be generally worth the effort even if [X] is not especially natural for you, or if [not X] or [anti-X] would be convenient.
Given a hundred instances of someone actively engaged in [anti-X], most of them will be motivated by something other than a desire to speak the truth, uncover the truth, or help others to understand the truth.
Thus, given the goals of clear thinking, clear communication, and collaborative truth-seeking, the burden of proof is on a given guideline violation to justify itself. There will be many cases in which violating a guideline will in fact be exactly the right call, just as the next marble drawn blindly from a bag of mostly red marbles may nevertheless be green. But if you’re doing something that’s actively contra to one of the above, it should be for a specific, known reason that you should be willing to discuss if asked (assuming you didn’t already explain up front).
Which leads us to the Zeroth Guideline: expect good discourse to (sometimes) require energy.
If it did not—if good discourse were a natural consequence of people following ordinary incentives and doing what they do by default—then it wouldn’t be recognizable as the separate category of good discourse.
A culture of (unusually) clear thinking, (unusually) clear communication, and (unusually) collaborative truth-seeking is not the natural, default state of affairs. It’s endothermic, requiring a regular influx of attention and effort to keep it from degrading back into a state more typical of the rest of the internet.
This doesn’t mean that commentary must always be high effort. Nor does it mean that any individual user is on the hook for doing a hard thing at any given moment.
But it does mean that, in the moments where meeting the standards outlined above would take too much energy (as opposed to being locally unnecessary for some other, more fundamental reason), one should lean toward saying nothing, rather than actively eroding them.
Put another way: a frequent refrain is “well, if I have to put forth that much effort, I’ll never say anything at all,” to which the response is often “correct, thank you.”
It’s analogous to a customer complaining “if Costco is going to require masks, then I’m boycotting Costco.” All else being equal, it would be nice for customers to not have to wear masks, and all else being equal, it would be nice to lower the barrier to communication such that more thoughts could be more easily included.
But all else is not equal; there are large swaths of common human behavior that are corrosive or destructive to the collaborative search for truth. No single contributor or contribution is worth sacrificing the overall structures which allow for high-quality conversation in the first place—if one genuinely does not have the energy required to e.g. put forth one’s thoughts while avoiding straightforwardly false statements, or while distinguishing inference from observation (etc.), then one should simply disengage.
Note that there is always room for discussion on the meta level; it is not the case that there is universal consensus on every norm, nor on how each norm looks in practice (though the above list is trying pretty hard to limit itself to norms that are on firm footing).
Note also that there is a crucial distinction between [fake/performative verbal gymnastics], and [sincere prioritization of truth and accuracy]—more on this in Sapir-Whorf for Rationalists.
For most practical purposes, this is the end of the post. All remaining sections are reference material, meant to be dug into only when there’s a specific reason to; if you read further, please know that you are doing the equivalent of reading dictionary entries or encyclopedia entries and that the remaining words are not optimized for being Generically Entertaining To Consume.
Where did these come from?
I tinkered with drafts of this essay for over a year, trying to tease out something like an a priori list of good discourse norms, wrestling with various imagined subsets of the LessWrong audience, and trying to predict what objections might arise. The whole thing was fairly sprawling, and I ultimately scrapped it in favor of just making a list of a dozen in-my-estimation unusually good rationalist communicators, and then writing down the things that made those people’s discourse stand out to me in the first place, i.e. the things it seems to me that they do a) 10-1000x more frequently than genpop, and b) 2-10x more frequently than the median LessWrong user.
That list comprised:
Logan Brienne Strohl
Dan Keys (making it a baker’s dozen)
I claim that if you contrast the words produced by the above individuals with the words produced by the rest of the English-speaking population, what you find is approximately the above ten guidelines.
In other words, the guidelines are descriptive of good discourse that already exists; here I am attempting to convert them into prescriptions, with some wiggle room and some caveats. But they weren’t made up from whole cloth; they are in fact an observable part of What Actually Works In Practice.
Some of the above individuals have specific deficits in one or two places, perhaps, and there are some additional things that these individuals are doing which are not basic, and not found above. But overall, the above is a solid 80/20 on How To Talk Like Those People Do, and sliding in that direction is going to be good for most of us.
Why does this matter?
In short: because the little things add up. For more on this, take a look at Draining the Swamp as an excellent metaphor for how ambient hygiene influences overall health, or revisit Concentration of Force, in which I lay out my argument for why we should care about small deltas on second-to-second scales, or Moderating LessWrong, which is sort of a spiritual precursor to this post.
1. Don’t say straightforwardly false things.
… and be ready and willing to explicitly walk back unintentional falsehoods, if asked or if it seems like it would help your conversational partner.
“In reality, everyone’s morality is based on status games.” → “As far as I can tell, the overwhelming majority of people have a morality that grounds out in social status.”
In normal social contexts, where few people are attending to or attempting to express precise truth, it’s relatively costless to do things like:
Use hyperbole for emphasis
Say a false thing, because approximately everyone will be able to intuit the nearby true thing that you’re intending to convey
Over-generalize; ignore edge cases and rounding errors (e.g. “Everybody has eyes.”)
Most of the times that people end up saying straightforwardly false things, they are not intending to lie or deceive, but rather following one of these incentives (or similar).
However, if you are actively intending to create, support, and participate in a culture of clear thinking, clear communication, and collaborative truth-seeking, it becomes more important than usual to break out of those default patterns, as well as to pump against other sources of unintentional falsehood like the typical mind fallacy.
This becomes even more important when you consider that places like LessWrong are cultural crossroads—users come from a wide variety of cultures and cannot rely on other users sharing the same background assumptions or norms-of-speech. It’s necessary in such a multicultural environment to be slower, more careful, and more explicit, if one wants to avoid translation errors and illusions of transparency and various other traps and pitfalls.
Some ways you might feel when you’re about to break the First Guideline:
The thing you want to say is patently obvious or extremely simple
There’s no reason to beat around the bush
It’s really quite important that this point be heard above all the background noise
Some ways a First Guideline request might look:
“Hang on—did you mean that literally?”
“I’m not sure whether or not you’re exaggerating in the above claims, and want to double-check that you mean them straightforwardly.”
2. Track and distinguish your inferences from your observations.
… or be ready and willing to do so, if asked or if it seems like it would help your conversational partner (or the audience). That is: build the habit of tracking the distinction between what something looks like, and what it definitely is.
“Keto works” → “I did keto and it worked.” → “I ate [amounts] of [foods] for [duration], and tracked whether or not I was in ketosis using [method]. During that time, I lost eight pounds while not changing anything about my exercise or sleep or whatever.”
“That’s propaganda.” → “That’s tripping my propaganda detectors.” → “That sentence contains [trait] and [trait] and [trait] which, as far as I can tell, are false/vacuous/just there to cause the reader to feel a particular way.”
“User buttface123 is a dirty liar.” → “I’ve caught user buttface123 lying three times now.” → “I’ve seen user buttface123 say false things in support of their point [three] [times] [now], and that last time was after they’d responded to a comment thread containing accurate info, so it wasn’t just simple ignorance. They’re doing it on purpose.”
The first and most fundamental question of rationality is “what do you think you know, and why do you think you know it?”
Many people struggle with this question. Many people are unaware of the processing that goes on in their brains, under the hood and in the blink of an eye. They see a fish, and gloss over the part where they saw various patches of shifting light and pattern-matched those patches to their preexisting concept of “fish.” Less trivially, they think that they straightforwardly observe things like:
Complex interventions in the world “working” or “not working”
The people around them “being nice” or “being assholes”
Particular pieces of food or art or music or architecture “just being good”
… and they miss the fact that they were running a bunch of direct sensory data through a series of filters and interpreters that brought all sorts of other knowledge and assumptions and past experience and causal models into play. The process is so easy and so habitual that they do not notice it is occurring at all.
(Where “they” is also meant to include “me” and “you,” at least some of the time.)
Practice the skill of slowing down, and zooming in. Practice asking yourself “why?” after the fashion of a curious toddler. Practice answering the question “okay, but if there were another step hiding in between these two, what would it be?” Practice noticing even extremely basic assumptions that seem like they never need to be stated, such as “Oh! Right. I see the disconnect—the reason I think X is worse than Y is because as far as I can tell X causes more suffering than Y, and I think that suffering is bad.”
This is particularly useful because different humans reason differently, and that reasoning tends to be fairly opaque, and attempting to work backward from [someone else’s outputs] to [the sort of inputs you would have needed, to output something similar] is a recipe for large misunderstandings.
Wherever possible, try to make explicit the causes of your beliefs, and to seek the causes underlying the beliefs of others, especially when you strongly disagree. Work on improving your ability to tease out what you observed separate from what you interpreted it to mean, so that the conversation can track (e.g.) “I saw A,” “I think A implies B,” and “I don’t like B” as three separate objects. If you’re unable to do so, for instance because you do not yet know the source of your intuition, try to note out loud that that’s what’s happening.
Some ways you might feel when you’re about to break the Second Guideline:
Everybody knows that X implies Y; it’s obvious/trivial.
The implications of what was just said are alarming, and need to be responded to.
There’s just no other explanation that fits the available data.
Some ways a Second Guideline request might look:
“Wait—can you tell me why you believe that?”
“That doesn’t sound observable to me. Would you mind saying what you actually saw?”
“Are you saying that it seems like X, or that it definitely is X?”
3. Estimate and make clear your rough level of confidence in your assertions.
… or be ready and willing to do so, if asked or if it seems like it would help another user.
Humans are notoriously overconfident in their beliefs, and furthermore, most human societies reward people for visibly signaling confidence.
Humans, in general, are meaningfully influenced by confidence/emphasis alone, separate from truth—probably not literally all humans all of the time, but at least in expectation and in the aggregate, either for a given individual across repeated exposures or for groups of individuals (more on this in Overconfidence is Deceit).
Humans are social creatures who tend to be susceptible to things like halo effects, when not actively taking steps to defend against them, and who frequently delegate and defer and adopt others’ beliefs as their own tentative positions, pending investigation, especially if those others seem competent and confident and intelligent. If you expose 1000 randomly-selected humans to a debate between a quiet, reserved person outlining an objectively correct position and a confident, emphatic person insisting on an unfounded position, many in that audience will be net persuaded by the latter, and others will feel substantially more uncertainty and internal conflict than the plain facts of the matter would have left them feeling by default.
Thus, there is frequently an incentive to misrepresent your confidence, for instrumental advantage, at the cost of our collective ability to think clearly, communicate clearly, and engage in collaborative truth-seeking.
Additionally, there is a tendency among humans to use vague and ambiguous language that is equally compatible with multiple interpretations, such as the time that a group converged on agreement that there was “a very real chance” of a certain outcome, only to discover later, in one-on-one interviews, that at least one person meant that outcome was 20% likely, and at least one other meant it was 80% likely (which are exactly opposite claims, in that 20% likely means 80% unlikely).
Thus, it behooves people who want to engage in and encourage better discourse to be specific and explicit about their confidence (i.e. to use numbers, and to calibrate your use of numbers over time, or to flag tentative beliefs as tentative, or to be clear about the source of your belief and your credence in that source).
“That’ll never happen.” → “That seems really unlikely to me.” → “I think the outcome you just described is … I’m sort of making up numbers here but it feels like it’s less than ten percent likely?”
“I don’t care what Mark said; I know they sell them at that store.” → “Look, I’d bet you five to one that if we go there, we’ll find them on the shelf.”
“The number one predictor of mass violence is domestic violence.” → “I’m pretty sure I recall seeing an article stating that the number one predictor of mass violence is domestic violence, and I’m pretty sure it was in a news source I thought was reputable.” → “Here’s the study, and here’s the methodology, and here’s the data.”
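The “calibrate your use of numbers over time” suggestion can be reduced to a simple bookkeeping habit: record each probabilistic claim alongside how it resolved, then check whether your 70%s come true about 70% of the time. A minimal sketch (the function name, the sample record, and the one-decimal-place bucketing are all illustrative choices, not anything prescribed by this post):

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: iterable of (stated_probability, came_true) pairs.
    Groups claims by stated confidence (rounded to one decimal place) and
    reports how often the claims in each bucket actually came true."""
    buckets = defaultdict(list)
    for p, came_true in predictions:
        buckets[round(p, 1)].append(came_true)
    return {level: sum(hits) / len(hits) for level, hits in sorted(buckets.items())}

# A well-calibrated record: the 80% claims came true 4 times out of 5,
# and the 30% claims came true 3 times out of 10.
record = [(0.8, True)] * 4 + [(0.8, False)] + [(0.3, True)] * 3 + [(0.3, False)] * 7
# calibration_report(record) → {0.3: 0.3, 0.8: 0.8}
```

A large, persistent gap between a bucket’s stated level and its empirical frequency is exactly the signal that your use of numbers needs adjusting.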
Some ways you might feel when you’re about to break the Third Guideline:
What was just said was wrong; thankfully you’re here to set the record straight.
Everybody knows that nobody literally means “100% certain,” so it’s not really deceptive or misleading.
There’s no need to be super explicit; the person you’re talking to is on the same wavelength and almost certainly “gets it.”
Some ways a Third Guideline request might look:
“I’m curious if you would be willing to bet some small amount of dollars on this, and if so, at what odds?”
“Hey, that’s a pretty strong statement—do you actually mean that there are no exceptions?”
“If I told you I had proof you were wrong, how surprised would you be?”
4. Make your claims clear, explicit, and falsifiable, or explicitly acknowledge that you aren’t doing so (or can’t).
… or at least be ready and willing to do so, if asked or if it seems like it would help make things more comprehensible.
It is, in fact, actually fine to be unsure, or to have a vague intuition, or to make an assertion without being able to provide cruxes or demonstrate how it could be proven/disproven. None of these things are disallowed in rational discourse.
But noting aloud that you are self-aware about the incomplete nature of your argument is a highly valuable social maneuver. It signals to your conversational partner “I am aware that there are flaws in what I am saying; I will not take it personally if you point at them and talk about them; I am taking my own position as object rather than being subject to it and tunnel-visioned on it.”
(This is a move that makes no sense in an antagonistic, zero-sum context, since you’re just opening yourself up to attack. But in a culture of clear thinking, clear communication, and collaborative truth-seeking, contributing your incomplete fragment of information, along with signaling that yes, the fragment is, indeed, a fragment, can be super productive.)
Much as we might wish that everyone could take for granted that disagreement is prosocial and productive and not an attack, it is not actually the case. Some people do indeed launch attacks under the guise of disagreement; some people do indeed respond to disagreement as if it were an attack even if it is meant entirely cooperatively; some people, fearing such a reaction, will be hesitant to note their disagreement in the first place, especially if their conversational partner doesn’t seem open to it.
“Look, just trust me, that whole group is bad news.” → “I had a bad experience with that group, and I know three other people who’ve each independently told me that they had bad experiences, too.”
“[Nation X] is worse than [Nation Y].” → “I’m willing to bet that if we each independently made lists of what measurable stats makes a nation good, and then checked, [Nation X] would be worse on at least 60% of them.”
“This is an outstanding investment.” → “Look, I can’t actually quite put my finger on what it is about this investment that stands out to me; I’m sort of running off an opaque intuition here. But I can at least say that I feel really confident about it—confident enough that I put in half my paycheck from last month. For calibration, the last time I felt this confident, I did indeed see a return of 300% in six months.”
The more clear it is what, exactly, you’re trying to say, the easier it is for other people to evaluate those claims, or to bring other information that’s relevant to the issue at hand.
The more your assertions manage to be checkable, the easier it is for others to trust that you’re not simply throwing spaghetti at the wall to see what sticks.
And the more you’re willing to flag your own commentary when it fails on either of the above, the easier it is to contribute to and strengthen norms of good discourse even with what would otherwise be a counterexample. Pointing out “this isn’t great, but it’s the best that I’ve got” lets you contribute what you do have, without undermining the common standard of adequacy.
Some ways you might feel when you’re about to break the Fourth Guideline:
It would be scary, or otherwise somehow bad, if you were to turn out to be mistaken about X.
There’s too much going on; you have a pile of little intuitions that all add up in a way that is too tricky to try tracking or explaining.
If you don’t make it sound like you know what you’re talking about, people might wrongly dismiss your true and valuable information/you don’t want to get unfairly docked just because you can’t shape your jargon to match the local lingo.
Some ways a Fourth Guideline request might look:
“If for some reason this turned out to be false, how would we know? What sorts of things would we see in the world where something else is going on?”
“I’m not sure I quite understand what you’re predicting, here. Can you list, like, three things you’re claiming I will unambiguously see over the next month?”
“Hey, it sounds like you don’t actually have legible cruxes. Is that correct?”
5. Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth.
… and be ready to falsify your impression otherwise, if evidence starts to pile up.
The goal of rationalist discourse is to be less wrong—for each of us as individuals and all of us as a group to have more correct beliefs, and fewer incorrect beliefs.
If two people disagree, it’s tempting for them to attempt to converge with each other, but in fact the right move is for both of them to try to see more of what’s true.
If you are moving closer to truth—if you are seeking available information and updating on it to the best of your ability—then you will inevitably eventually move closer and closer to agreement with all the other agents who are also seeking truth.
However, when conversations get heated—when the stakes are high—when the other person not only appears to be wrong but also to be acting in poor faith—that’s when it’s the most valuable to keep in touch with the possibility that you might be misunderstanding each other, or that the problem might be in your models, or that there might be some simple cultural or norms mismatch, or that your conversational partner might simply be locally failing to live up to standards that they do, in fact, generally hold dear, etc.
It’s very easy to observe another person’s output, evaluate it according to your own norms and standards, and conclude that you understand their motives and that those motives are bad.
It is not, in fact, the case that everyone you engage with is primarily motivated by truth-seeking! Even in enclaves like LessWrong, there are lots of people who are prioritizing other goals over that one a substantial chunk of the time.
But simple misunderstandings, and small, forgivable, recoverable slips in mood or mental discipline outnumber genuine bad faith by a large amount. If you are running a tit-for-tat algorithm in which you quickly respond to poor behavior by mirroring it back, you will frequently escalate a bad situation (and often appear, to the other person, like the first one who broke cooperation).
Another way to think of this is: it pays to give people two extra chances to demonstrate that they are present in good faith and genuinely trying to cooperate, because if they aren’t, they’ll usually prove it soon enough anyway. You don’t have to turn the other cheek repeatedly, but doing so once or twice more than you would by default goes a long way toward protecting against false positives on your bad-faith detector.
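The tit-for-tat point above can be made concrete with a toy simulation (the function name, round counts, and the “grudge threshold” parameter are my own illustrative inventions, not anything from the post): in strict tit-for-tat, a single misread or slip locks both parties into alternating retaliation, while requiring two consecutive defections before retaliating absorbs the slip entirely.

```python
def play(rounds, grudge_threshold, error_round):
    """Two agents play an iterated game; each defects ("D") only if the
    opponent's last `grudge_threshold` moves were all defections.
    grudge_threshold=1 is strict tit-for-tat; 2 gives one extra chance."""
    a_hist, b_hist = [], []

    def move(opp_hist):
        recent = opp_hist[-grudge_threshold:]
        if len(recent) == grudge_threshold and all(m == "D" for m in recent):
            return "D"
        return "C"

    for t in range(rounds):
        a, b = move(b_hist), move(a_hist)
        if t == error_round:
            a = "D"  # a single accidental (or misread) defection by agent A
        a_hist.append(a)
        b_hist.append(b)
    return a_hist, b_hist

# Strict tit-for-tat: one slip produces endless alternating retaliation.
a1, b1 = play(50, grudge_threshold=1, error_round=5)
# One extra chance before retaliating: the slip is absorbed, cooperation resumes.
a2, b2 = play(50, grudge_threshold=2, error_round=5)
```

In the strict run, every round after the slip contains a defection by one side or the other; in the forgiving run, every move after the slip is cooperative.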
“You’re clearly here in bad faith.” → “In the past three comments, you said [thing], [thing], and [thing], all of which are false, and all of which it seems to me you must know are false; you’re clearly here in bad faith.” → “Listen, as I look back over what’s already been said, I’m seeing a lot of stuff that really sets off my bad-faith detectors (such as [thing]). Can we try slowing down, or maybe starting over? Like, I’d have an easier time dropping back down from red alert if you engaged with my previous comment that you totally ignored, or if you were at least willing to give me some of your reasons for believing [thing].”
This behavior can be modeled, as well—the quickest way to get a half-derailed conversation back on track is to start sharing pairs of [what you believe] and [why you believe it]. To demonstrate to your conversational partner that those two things go together, and show them the kind of conversation you want to have.
(This is especially useful on the meta level—if you are frustrated, it’s much better to say “I’m seeing X, and interpreting it as meaning Y, and feeling frustrated about that!” than to just say “you’re being infuriating.”)
You could think of the conversational environment as one in which defection strategies are rampant, and many would-be cooperators have been trained and traumatized into hair-trigger defection by repeated sad experience.
Taking that fact into account, it’s worth asking “okay, how could I behave in such a way as to invite would-be cooperators who are hair-trigger defecting back into a cooperative mode? How could I demonstrate to them, via my own behavior, that it’s actually correct to treat me as a collaborative truth-seeker, and not as someone who will stab them as soon as I have the slightest pretext for writing them off?”
Some ways you might feel when you’re about to break the Fifth Guideline:
It’s more important to settle this one than to get all of the little fiddly details right.
There’s no way they could have been unaware of the implications of what they said.
I’m going to write X, and if they respond with Y then I’ll know they’re here in bad faith. (The giveaway here being the desire to see them fail the test, versus a more dispassionate poking at various possibilities.)
Some ways a Fifth Guideline request might look:
“Hey, sorry for the weirdly blunt request, but: I get the sense that you’re not treating me as a cooperative partner in this conversation. Is, uh. Is that true?”
“I’m finding it pretty hard to stay in this back-and-forth. Can you maybe pause and look back through what I’ve written and note anything you agree with? I’ll do the same, e.g. you said X and I do think that’s a piece of this puzzle.”
“What’s your goal in this conversation?”
6. Don’t jump to conclusions—maintain at least two hypotheses consistent with the available information.
… or be ready and willing to generate a real alternative to your main hypothesis, if asked or if it seems like it would help another user.
“You’re strawmanning me.” → “It really seems like you’re strawmanning me.” → “I can’t tell whether you’re strawmanning me or whether there’s some kind of communication breakdown.” → “I can’t tell whether you’re strawmanning me or whether there’s some kind of communication breakdown; my best guess is that you think that [the phrase I wrote] means [some other thing].”
There exists a full essay on this concept titled Split and Commit. The short version is that there is a large difference between a person who has a single theory (which they are nominally willing to concede might be false), and a person who has two fully distinct possible explanations for their observations, and is looking for evidence to distinguish between them.
Another way to point at this distinction is to remember that bets are different from beliefs.
Most of the time, you are forced to make some sort of implicit bet. For instance, you have to choose how to respond to your conversational partner, and responding-to-them-as-if-they-were-sincere is a different “bet” than responding-to-them-as-if-they-were-insincere.
And because people are so often converting their beliefs into bets, and because bets are often effectively binary, they tend to lose track of the more complicated belief that preceded the rounding-off.
If a bag of 100 marbles contains 70 red ones and 30 green ones, the best bet for the first string of ten marbles out of the bag is RRRRRRRRRR. Any attempt to sprinkle some Gs into your prediction is more likely to be wrong than right, since any single position is 70% likely to contain an R.
(There’s less than a 3% chance of the string being RRRRRRRRRR, but the odds of any other specific string are even worse.)
But it would be silly to say that you believe that the next ten marbles out of the bag will all be red. If forced, you will predict RRRRRRRRRR, because that’s the least wrong prediction, but actually (hopefully) your belief is “for each marble, it’s more likely to be red than green but it could pretty easily be green.”
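(The marble arithmetic above can be checked directly. A minimal sketch, treating each draw as an independent 70% chance of red; this is the approximation the numbers above rely on, and drawing without replacement gives nearly the same figures.)

```python
# Probability of the single most likely ten-marble string, RRRRRRRRRR,
# treating each draw as an independent 70%-red event.
p_red = 0.7
p_all_red = p_red ** 10
print(f"P(RRRRRRRRRR) = {p_all_red:.4f}")  # 0.0282, i.e. under 3%

# Any specific string containing a G is individually less likely,
# because each G position trades a 0.7 factor for a 0.3 factor.
p_nine_red_one_green = p_red ** 9 * 0.3
assert p_nine_red_one_green < p_all_red

# Yet "all ten red" is still a bad *belief*: at least one green
# appearing somewhere in the string is overwhelmingly likely.
p_at_least_one_green = 1 - p_all_red
print(f"P(at least one G) = {p_at_least_one_green:.4f}")  # ~0.97
```

The bet RRRRRRRRRR beats every other specific string, while the belief “probably some green shows up” is nonetheless true with ~97% probability: exactly the bets-versus-beliefs gap the passage describes.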
In similar fashion, when you witness someone’s behavior, and your best bet is “this person is biased or has an unstated agenda,” your belief should ideally be something like “this behavior is most easily explained by an unstated agenda, but if I magically knew for a fact that that wasn’t what was happening, the next most likely explanation would be ______________.”
That extra step—of pausing to consider what else might explain your observations, besides your primary theory—is one that is extremely useful, and worth practicing until it becomes routine. People who do not have this reflex tend to fall into many more pits/blindspots, and to have a much harder time bridging inferential gaps, especially with those they do not already agree with.
Some ways you might feel when you’re about to break the Sixth Guideline:
You’ve seen this before; you know exactly what this is.
X piece of evidence will be sufficient to prove or disprove your hypothesis.
It’s really important that you respond to what’s happening; the stakes are high and inaction would be problematic.
Some ways a Sixth Guideline request might look:
“Do you think that’s the only explanation for these observations?”
“It sounds like you’re trying to evaluate whether X is true or false. What’s your next best theory if it turns out to be false?”
“You’re saying that A implies B. How often would you say that’s true? Like, is A literally tantamount to B, or does A just lead to B 51% of the time, or … ?”
7. Be careful with extrapolation, interpretation, and summary/restatement.
Distinguish between what was actually said and what it sounds like/what it implies/what you think it looks like in practice/what it’s tantamount to, especially if another user asks you to pay more attention to this distinction than you were doing by default. If you believe that a statement A strongly implies B, and you are disagreeing with A because you disagree with B, explicitly note that “A strongly implies B” is a part of your model. Be willing to do these things on request if another person asks you to, or if you notice that it will help move the conversation in a healthier direction.
Another way to put this guideline is “don’t strawman,” but it’s important to note that, from the inside, strawmanning doesn’t typically feel like strawmanning.
“Strawmanning” is a term for situations in which:
Person A has a point or position that they are trying to express or argue for
Person B misrepresents that position as being some weaker or more extreme position
Person B then attacks, disparages, or disproves the worse version (which is presumably easier than addressing Person A’s true argument)
Person B constructs a strawman, in other words, just so they can then knock it down.
There’s a problem with the definition above; readers are invited to pause and see if they can catch it.
If you’d like a hint: it’s in the last line (the one beginning with “Person B constructs a strawman”).
The problem is in the last clause.
“Just so they can then knock it down” presupposes purpose. Not only is Person B engaging in misrepresentation, they’re doing it in order to have some particular effect on the larger conflict, presumably in the eyes of an audience (since knocking over a strawman won’t do much to influence Person A).
It’s a conjunction of act and intent, implying that the vast majority of people engaging in strawmanning are doing so consciously, strategically, and in a knowingly disingenuous fashion—or, if they’re not fully self-aware about it, they’re nevertheless subconsciously optimizing for making Person A’s position appear sillier or flimsier than it actually is.
This does not match how the term is used, out in the wild; it would be difficult to believe that even twenty percent of my own encounters with others using the term (let alone a majority, let alone all of them) are downstream of someone being purposefully deceptive. Instead, the strawmanning usually seems to be “genuine,” in that the other person really thinks that the position being argued actually is that dumb/bad/extreme.
It’s an artifact of blind spots and color blindness; of people being unable-in-practice to distinguish B from A, and therefore thinking that A is B, and not realizing that “A implies B” is a step that they’ve taken inside their heads. Different people find different implications to be more or less “obvious,” given their own cultural background and unique experiences, and it’s easy to typical-mind that the other relevant people in the conversation have approximately the same causal models/context/knowledge/anticipations.
If it’s just patently obvious to you that A strongly implies B, and someone else says A, it’s very easy to assume that everyone else made the leap to B right along with you, and that the author intended that leap as well (or intended to hide it behind the technicality of not having actually come out and said it). It may feel extraneous or trivial, in the moment, to make that inference explicit—you can just push back on B, right?
Indeed, if the leap from A to B feels obvious enough, you may literally not even notice that you’re making it. From the inside, a blindspot doesn’t feel like a blindspot—you may have cup-stacked your way straight from A to B so quickly and effortlessly that your internal experience was that of hearing them say B, meaning that you will feel bewildered yourself when what seems to you to be a perfectly on-topic reply is responded to as though it were an adversarial non-sequitur.
(Which makes you feel as if they broke cooperation first; see the sixth guideline.)
People do, in fact, intend to imply things with their statements. People’s sentences are not contextless objects with unambiguous meanings. It’s entirely fine to hazard a guess as to someone’s intended implications, or to talk about what most people would interpret a given sentence to mean, or to state that [what they wrote] landed with you as meaning [something else]. The point is not to pretend that all communication is clear and explicit; it’s to stay in contact with the inherent uncertainty in our reconstructions and extrapolations.
“What this looks like, in practice” or “what most people mean by statements of this form” are conversations that are often skipped over, in which unanimous consensus is (erroneously) taken for granted, to everyone’s detriment. A culture that seeks to promote clear thinking, clear communication, and collaborative truth-seeking benefits from a high percentage of people who are willing to slow down and make each step explicit, thereby figuring out where exactly shared understanding broke down.
Some ways you might feel when you’re about to break the Seventh Guideline:
Outraged, offended, insulted, or attacked.
Irritated at the other person’s sneakiness or disingenuousness.
Like you need to defend against a motte-and-bailey.
Some ways a Seventh Guideline request might look:
“That’s not what I wrote, though. Can you please engage with what I wrote?”
“Er, you seem to be putting a lot of words in my mouth.”
“I feel like I’m being asked to defend a position I haven’t taken. Can you point at what I said that made you think I think X?”
8. Allow people to restate, clarify, retract, and redraft their points.
Communication is difficult. Good communication is often quite difficult.
One of the simplest interventions for improving discourse is to allow people to try again.
Sometimes our first drafts are clumsy in their own right—we spoke too soon, or didn’t think things through deeply enough.
Other times, we said words which would have caused a clone of ourselves to understand, but we failed to account for some crucial cultural difference or inferential gap with our non-clone audience, and our words caused them to construct a meaning very different from the meaning we intended.
Also, sometimes we’re just wrong!
It’s quite common, on the broader internet and in difficult in-person conversations, for people’s early rough-draft attempts to convey a thought to haunt them. People will relentlessly harp on some early, clumsy phrasing, or act as if the speaker intended the unpleasant ramifications of some point (ramifications which the speaker in fact failed to consider).
The result is a chilling effect on speech (since you feel you have to get everything right on your first try or face social punishment) and a disincentive to make updates and corrections (since those corrections will often simply be ignored and you’ll be punished anyway as if you never made them, so why bother).
Part of the solution is to establish a culture of being forgiving of imperfect first drafts (and generous/light-touch in your interpretation of them), and of being open to walkbacks or restatements or clarifications.
It’s perfectly acceptable to say something like “This sounds crazy/abhorrent/wrong to me,” or to note that what they wrote seems to you to imply some statement B that is bad in some way.
It’s also perfectly reasonable to ask that people demonstrate that they see what was wrong with their first draft, rather than just being able to say “no, I meant something subtly different” ad infinitum.
But if your conversational partner replies with “oh, gosh, sorry, no, that is not what I’m trying to say,” it’s generally best to take that assertion at face value, and let them start over. As with the sixth guideline, this means that you will indeed sometimes be giving extra leeway to people who are actually being irrational/unreasonable/bad/wrong, but most of the time, it means that you will be avoiding the failure mode of immediately leaping to a conclusion about what the other person meant and then refusing to relinquish that assumption.
The claim is that the cost of letting a few more people “get away with it” a little longer is lower than the cost of curtailing the whole population’s ability to think out loud and update on the fly.
Some ways you might feel when you’re about to break the Eighth Guideline:
They really, really shouldn’t have said that thing; they really should have known better.
You can tell what they really meant, and now they’re just backpedaling.
Damage was done, and merely saying “I didn’t mean it” doesn’t undo the damage.
Some ways an Eighth Guideline request might look:
“Oh, that word/phrase means something different to you than it does to me. Let me try again with different words, because the thing you heard is not the thing I was trying to say.”
“I hear that you have a pretty strong objection to what I said. I’m wondering if I could start over entirely, rather than saying a new thing and having you assume that what I meant is in between the two versions of what I’ve said.”
“Can you try passing my ITT, so that I can see where I’ve miscommunicated?”
9. Don’t weaponize equivocation/abuse categories/engage in motte-and-bailey shenanigans.
...and be open to eschewing/tabooing broad summary words and talking more about the details of your model, if another user asks for it or if you suspect it would lower the overall confusion in a given interaction.
Labels are great.
However, labels are a tool with some known failure modes. When someone uses a conceptual handle like “marriage,” “genocide,” “fallacy of the grey,” or “racist,” they are staking a claim about the relationship between a specific instance of [a thing in reality], and a cluster of [other things] that all share some similar traits.
That leads to some fairly predictable misunderstandings.
For instance, someone might notice that a situation has (e.g.) three out of seven salient markers of gaslighting (in their own personal understanding of gaslighting).
Three out of seven is a lot, when most things have zero out of seven! So it’s reasonable for them to bring in the conceptual handle “gaslighting” as they begin to reason about and talk about the situation—to port in the intuitions and strategies that are generally useful for things in the category.
But it’s very easy for people to fail to make clear that they’re using the term “gaslighting” because the situation had specific markers X, Y, and Z, and that it doesn’t seem to have markers T, U, V, or W at all, to say nothing of whether their own idiosyncratic seven markers line up with the consensus understanding of gaslighting.
And thus the application of the term can easily cause other observers to implicitly conclude that all of T, U, V, W, X, Y, and Z are nonzero involved (and possibly also Q, R, and S that various other people bring to the table without realizing that they are non-universal).
When this is done intentionally, we call it weaponized equivocation or motte-and-bailey, i.e. “I can make the term gaslighting stick in a technically justified sense, and then abuse the connotation to make everybody think that you were doing all of the bad things involved in gaslighting on purpose and that you are a gaslighter, with all that entails.”
But it also happens by accident, quite a lot. A conceptual handle makes sense to Person A, so they use it, and Person B both loses track of nuance and also injects additional preconceptions, based on their understanding of the conceptual handle.
The general prescription is to use categories and conceptual handles as a starting point, and then carefully check one’s understanding.
“This is just the concept of lossy compression.” → “This is making me think of lossy compression; is there anything here that’s not already covered by that concept?” → “What I’m hearing is A, B, C, and D, which happen to be exactly the same markers I have for ‘lossy compression’. Are you in fact saying A, B, C, and D? And are you in fact not saying anything else?”
Another way to think of this prescription is to recognize that the use of categories and conceptual handles is warping, in the sense that categories and conceptual handles are often like gravitational attractors pulling people’s models toward a baseline archetype or stereotype. They tend to loom large, and obscure away detail, and generate a kind of top-down smoothing consensus or simplification.
That’s super useful when the alternative is having them be lost out in deep space, but it’s also not as good as using the category to get them in the right general vicinity and then deliberately not leaning on the category once they’re close enough that you can talk about all of the relevant specifics in detail.
Some ways you might feel when you’re about to break the Ninth Guideline:
The way in which the thing under discussion is an instance of X is the most important factor, dwarfing all other considerations.
The unique or non-typical aspects of the thing are obvious and go without saying.
Everybody knows what X means.
Some ways a Ninth Guideline request might look:
“Hang on, you used a category word that covers a lot of ground. Can you name, like, one or two other instances of X that are roughly on par? I currently don’t know if you mean bad-like-sunburns or bad-like-cancer.”
“What’s the value of agreeing on this being an X? Like, you’re bidding for this label to be attached … what comes out of that, if we all end up agreeing?”
“If I were to say that this isn’t an X, it’s actually a Y, what would you say to that?”
10. Hold yourself to the absolute highest standard when directly modeling or assessing others’ internal states, values, and thought processes.
“You’re obviously crazy.” → “This seems crazy to me.” → “I’m having a hard time making this make sense, and I’m seriously considering the possibility that it just doesn’t make sense, and you’re confused/crazy.” → “This really sounds to me like it’s more likely to come from some disorganized or broken thought process than something that’s grounded in reality. I apologize for that; I know the previous sentence is more than a little rude. I would have much less weight on that hypothesis if you could [pass some kind of concrete test I propose that would demonstrate that you’re not incapable of reason in this domain].”
Of the ten guidelines, this is the one which is the least about epistemic hygiene, and the most about social dynamics.
(It’s not zero about epistemic hygiene, but it deserves extra emphasis for pragmatic reasons rather than philosophical ones.)
If you believe that someone is being disingenuous or crazy or is in the grips of a blindspot—if you believe that you know, better than they know themselves, what’s going on in their head (or perhaps that they are lying about what’s going on in their head)—then it is important to be extra cautious and principled about how you go about discussing this fact.
This is important because it’s very easy for people to (reasonably) feel attacked or threatened or delegitimized when others are making bold or judgment-laden assertions about the internal contents of their mind/soul/values, and it’s very hard for conversation to continue to be productive when one of the central participants is partially or fully focused on defending themselves from perceived social attack.
It is actually the case that people are sometimes crazy. It is actually the case that people are sometimes lying. It is actually the case that people are sometimes mistaken about the contents of their own minds, and that other people, on the outside, can see this more clearly. A blanket ban on hypotheses-about-others’-internals would be crippling to anyone trying to see clearly and understand the world; these things should, indeed, be thinkable and discussible, the fact that they are “rude” notwithstanding.
But by making those hypotheses a part of an open conversation, you’re adding a great deal of social and emotional strain to the already-difficult task of collaborative truth-seeking with a plausibly-compromised partner. In many milieus, the airing of such a hypothesis is an attack; there are not a lot of places where “you might be crazy” or “I know more than you about how your mind works” is a neutral or prosocial move. If the situation is such that it feels genuinely crucial for you to raise such a hypothesis out loud, then it should also be worth correspondingly greater effort and care.
(See the zeroth guideline.)
Some simple actions that tend to make this sort of thing go less reliably badly:
Take the social hit onto your own shoulders. Openly and explicitly acknowledge that you are, in fact, making assertions about the interior of another person’s mind; openly and explicitly acknowledge that this is, in fact, nonzero rude and carries with it nonzero social threat. Doing this gives the other person more space to be visibly shaken or upset without creating the appearance of proving your point; it helps defuse the threat vector by which one person provokes another into appearing unreasonable or hysterical and thereby delegitimizes them.
State the reasons for your belief. Don’t just assert that you think this is true; include quotes and references that show what led you to generate the hypothesis. This grounds your assertions in reality rather than in your own personal assessment, allowing others to retrace and affirm/reject your own reasoning.
Give the other person an out. Try to state some things that would cause you to conclude that they are not compromised in the way you fear they are (and do your best to make this a fair and reasonable test rather than a token impossibility). Imagine the world in which you are straightforwardly mistaken, and ask yourself how you would distinguish that world from the world you think that you’re in.
For more on this, see [link to a future essay that is hopefully coming from either Ray Arnold or myself].
Some ways you might feel when you’re about to break the Tenth Guideline:
This person’s conduct is clear; there’s only one possible interpretation.
This person is threatening norms and standards that are super important for making any further conversation productive.
It’s important that the audience understand why they need to stop listening to this person immediately.
Some ways a Tenth Guideline request might look:
“Please stop making assertions about the contents of my mind; you are not inside my head.”
“Do you have any alternative explanations for why a person might take the position I’m taking, that don’t involve being badwrong dumbcrazy?”
“It feels like you’re setting up a fully general argument against literally anything else I might say. What, uh. What do you think you know and why do you think you know it?”
(These requests are deliberately written to appear somewhat triggered/hostile, because that’s the usual tone by the point such a request needs to be made, and a little bit of leeway for the beleaguered seems appropriate.)
Appendix: Miscellaneous Thoughts
This post was long, and was written over the course of many, many months. Below are some scattered, contextless snippets of thought that ended up not having a home in any of the sections above.
Some general red flags for poor discourse:
Things are obvious, and the people who are not getting the obvious things are starting to get on your nerves.
You just can’t wait to hit “submit.” Words are effortlessly tumbling forth from your fingertips.
You are exhausted, but also can’t afford not to respond.
You feel angry/hurt/filled with righteous indignation.
Your conversational partner has just clearly demonstrated that they’re not there in good faith.
Some sketchy conversational movements that don’t fall neatly into the above:
Being much quieter in one’s updates and oopses than one was in one’s bold wrongness
Treating a 70% probability of innocence and a 30% probability of guilt as a 100% chance that the person is 30% guilty (i.e. kinda guilty).
Pretending that your comment is speaking directly to a specific person while secretly spending the majority of your attention and optimization power on playing to some imagined larger audience.
Generating interventions that will make you feel better, regardless of whether or not they’ll solve the problem (and regardless of whether or not there even is a real problem to be solved, versus an ungrounded anxiety/imaginary injury).
Generally, doing things which make it harder rather than easier for people to see clearly and think clearly and engage with your argument and move toward the truth.
A skill not mentioned elsewhere in this post: the meta-skill of being willing to recognize, own up to, apologize for, and correct failings in any of the above, rather than hiding one’s shame or doubling down or otherwise acting as if the problem is the mistake being seen rather than the mistake being made.
Appendix: Sabien’s Sins
The following is something of a precursor to the above list of basics; it was not intended to be as complete or foundational as the ten presented here, but instead more surgically targeted some of the most frustrating deltas between this subculture’s revealed preferences and my own endorsed standards. It was posted on Facebook several years ago; I include it here mostly as a historical curiosity.
We continue to creep closer to actually sufficient discourse norms, as a culture, mostly via a sort of stepwise mutual disarmament. Modern Western discourse norms (e.g. “don’t use ad hominem attacks”) got us something like seventy percent of the way there. Rationalist norms (e.g. “there’s no such thing as 100% sure”) got us maybe seventy percent of the remaining distance. The integration of Circling/NVC/Focusing/Belief Reporting frames (e.g. “I’m noticing that I have a story about you”) got us seventy percentish yet again.
This is an attempt to make another seventy percent patch. It is intended to build upon the previous norms, not to replace them. It isn’t perfect or comprehensive by any means—it’s derived entirely from my own personal frustrations during the past three or four years of online interactions—but it closes a significant number of the remaining socially-tolerated loopholes that allow even self-identified rationalists to “win” based on factors other than discernible, defensible truth. Some of these “sins” are subsets of others, but each occurs often enough in my experience to merit its own standalone callout.
By taking the pledge, you commit to:
Following the below discourse norms to the best of your ability, especially with other signatories, and noting explicitly where you are deliberately departing from them.
Being open to feedback that you are not following these norms, and taking such feedback as aid in adhering to your own values rather than as an attempt to impose values from without.
Doing your best to vocally, visibly, and actively support others who are following these norms, rather than leaving those people to fend for themselves against their interlocutors.
By taking the pledge, you do not commit to:
Endless tolerance of sealioning, whataboutery, rules-lawyering, or other adversarial or disingenuous tactics that attempt to leverage the letter of the law in violation of its spirit.
Adhering to the norms in contexts where no one else is, and where doing so therefore puts you at a critical disadvantage to no real benefit. A peace treaty is not a suicide pact.
Until I state otherwise, I hereby pledge that I shall refrain from using the following tactics in discourse:
Asserting the unfounded. I will not overstate my claims. I will exercise my right to form hypotheses and guesses, and to express those both with and without justification, but I will actively disambiguate between “I think or predict X” and “X is true.” I will keep my own limited perspective and experience in mind when making generalizations, and avoid universal statements unless I genuinely mean them.
Overlooking the inconvenient. I will not focus only on the places where my argument is strongest. I will not ghost from interactions in which I am losing the debate. I will do my best to respond to every point my interlocutors raise, or to explicitly acknowledge a refusal to do so (e.g. “I’m not going to discuss your third or fourth points.”), rather than quietly steering the conversation in another direction. I will acknowledge and explicitly endorse the parts of my interlocutor’s argument that seem true and correct. I will not adversarially summarize my interlocutor. When wrong, I will correct my error and admit fault, and will strive to own and propagate my new position at least as effectively as I asserted, defended, and propagated the original claim.
Eliding the investigation. I will not equivocate between priors and posteriors. I will strive to maintain the distinction between what things look like or are likely to be, and what we know with confidence that they are. I will preserve the difference between expecting a marble to be red because it came from a bag labeled “red marbles,” and claiming that a marble must be red because of the bag it came from. I will not assign, to a given member of a set, responsibility for every characteristic of that set. I will honor and support both sensible priors and defensible posteriors, and remember to reason about each separately.
Indulging in presumption. I will not make authoritative claims about the contents of others’ thoughts, intentions, or experiences without explicit justification. I remain free to state my suspicions and to make predictions and to form hypotheses, and to support those suspicions/predictions/hypotheses with fact and argument, but I will keep the extreme variability of human experience in mind when attempting to deduce others’ inner workings from their visible outputs, and I will remember not to discount what others have to say about themselves without strong and compelling reason. I will distinguish “these behaviors often correlate with these beliefs” from “you exhibited these behaviors with these results, therefore you must believe these things.” I will keep even my very high confidence hypotheses falsifiable. I will avoid sneers, insinuations, subtle put-downs, and all other manipulations-of-reputation that score points by painting someone else into a corner.
Hiding in the grey. I will not engage in motte-and-bailey shenanigans. I will not “play innocent” or otherwise pretend obliviousness to the obvious meanings or ramifications of my words. I will attend carefully to context, and will be deliberate and explicit about my choice between contextualizing and decoupling norms, and not mischaracterize those who prefer the opposite of my own preference. I will not make statement-X intending meaning-Y, if it is clear that a reasonable person employing common sense would take statement-X to mean something very different from Y. I will not fault others for reacting to what I said, even if it is not what I meant. I will uphold a norm of allowing all participants to recant what they said, and try again, as long as they acknowledge explicitly that that is what they are doing.
I would like to propose two other guidelines:
Be aware of asymmetric discourse situations.
A discourse is asymmetric if one side can’t speak freely, because of taboos or other social pressures. If you find yourself arguing for X, ask yourself whether arguing for not-X is costly in some way. If so, don’t take weak or absent counterarguments as substantial evidence in your favor. Often simply having a minority opinion makes it difficult to speak up, so defending a majority opinion is already some sign that you might be in an asymmetric discourse situation. The presence of such an asymmetry also means that the available evidence is biased in one direction, since the arguments of the other side are expressed less often.
Always treat hypotheses as having truth values, never as having moral values.
If someone makes [what you perceive as] an offensive hypothesis, remember that the most that can be wrong with that hypothesis is that it is false or disfavored by the evidence. Never is a hypothesis by itself morally wrong. Acts and intentions can be immoral; hypotheses are neither of those. If you strongly suspect that someone has some particular intention with stating a hypothesis, then be honest and say so explicitly.
The latter guideline was inspired by quotes from Ronny Fernandez and Arturo Macias. Fernandez:
(He adds some minor caveats.)
Most of these seem straightforwardly correct to me. But I think of the 10 things in this list, this is the one I’d be most hesitant to present as a discourse norm, and most worried about doing damage if it were one. The problem with it is that it’s taking an epistemic norm and translating it into a discourse norm, in a way that accidentally sets up an assumption that the participants in a conversation are roughly matched in their knowledge of a subject. Whereas in my experience, it’s fairly common to have conversations where one person has an enormous amount of unshared history with the question at hand. In the best case scenario, where this is highly legible, the conversation might go something like:
In which case B isn’t currently maintaining two hypotheses, and is firmly set in a conclusion, but there’s enough of a legible history to see that the conclusion was reached via a full process and wasn’t jumped to.
But often what happens is that B has previously engaged with the topic in depth, but in an illegible way; eg, they spent a bunch of hours thinking about the topic and maybe discussed it in person, but never produced a writeup, or they wrote long blog-comments but forgot about them and didn’t keep track of the link. So the conversation winds up looking like:
A misparses B as having a lot less context and prior thinking about [P] than they really do. In this situation, emphasizing the virtue of not-jumping-to-conclusions as a discourse norm (rather than an epistemic norm) encourages A to treat the situation as a norm violation by B, rather than as a mismodeling by A. And, sure, at a slightly higher meta-level this would be an epistemic failure on A’s part, under the same standard, and if A applied that standard to themself reliably this could keep them out of that trap. But I think the overall effect of promoting this as a norm, on this situation, is likely to be that A gets nudged in the wrong direction.
I feel uncomfortable with this post’s framing. It feels like someone went into a garden I spend my time in and unilaterally put up a sign with a list of guidelines people should follow in the garden, with no ability to enforce these. I know that I can choose on my own whether or not to follow these guidelines, based on whether I think they are good ideas, but newcomers to the garden will see the sign and assume they have to follow them. I would have vastly preferred that the sign instead say “I personally think these norms would be neat, here’s why.”
(to clarify: the garden = lesswrong/the rationalist community. the sign = this post)
I note that this sort of sentiment is something I was aware of, and I made choices around this deliberately (e.g. considered titling the post “Duncan’s Basics” and decided not to).
I do not quite think that these norms are obvious and objective (e.g. there’s some pretty decent discussion on the weaknesses of 5 and 10 elsewhere), but I think they’re much closer to being something like an objectively correct description of How To Do It Right than they are to a mere random user’s personal opinion; headlining them as “I personally think these norms would be neat” would be substantially misleading/deceptive/manipulative and wouldn’t accurately reflect the strength of my actual claim.
I think the discomfort you’re pointing at is real and valid and a real cost, but I have been wrestling with LessWrong’s culture for coming up on eight years now, and I think it’s a cost worth paying relative to the ongoing costs of “we don’t really have clear standards of any kind” and “there’s really nothing to point to if people are frustrated with each other’s engagement style.”
(There really is almost nothing; a beginner being like “how do I do this whole LessWrong thing?” has very little in the way of “here are the ropes; here’s what makes LW discourse different from the EA forum or Facebook or Reddit or 4chan.”)
I also considered trying to crowdsource a thing, and very very very strongly predicted that what would happen would be everyone acting as if everyone has infinite vetos on everything, and an infinite bog of circular debate, and as a result [nothing happening]. I simultaneously believe that there really actually is a set of basics that a supermajority of LWers implicitly agree on and that there is basically no chance of getting the mass of users as a whole to explicitly converge on and ratify anything.
So my compromise was … as you see. It wasn’t a thoughtless or light decision; I think this was the least bad of all the options, and better than saying “I personally think,” and better than doing nothing.
(I do think newcomers assuming they should generally follow these guidelines is an improvement over status quo of last week.)
I note that if this sparks some other, better proposal and that proposal wins, this is a pretty awesome outcome according to me; I do think there exist possible Better Drafts of this or nearby frameworks.
So far as I can tell, the actual claim you’re making in the post is a pretty strong one, and I agree that if you believe that you shouldn’t represent your opinion as weaker than it is. However, I don’t think the post provides much evidence to support the rather strong claim it makes. You say that the guidelines are:
and I think this might be true, but it would be a mistake for a random user, possibly new to this site, to accept your description over their own based on the evidence you provide. I worry that some will regardless, given the ~declarative way your post seems to be framed.
What do you mean “over their own”?
I think I am probably misreading you, but what I think that sentence meant is something like:
Random newcomers to LW have a clear sense of what constitutes the core of good rationalist discourse
They’re more likely to be right than I am, or we’re “equally right” or something (I disagree with a cultural relativist claim in this arena, if you’re making one, but it’s not unreasonable to make one)
They will see this post and erroneously update to it, just because it’s upvoted, or because the title pretends to universality, or something similar
Reiterating that I’m probably misunderstanding you, I think it’s a mistake to model this as a situation where, like, “Duncan’s providing inadequate evidence of his claims.”
I’m a messenger. The norms can be evaluated extremely easily on their own; they’re not “claims” in the sense that they need rigorous evidence to back them up. You can just … look, and see that these are, on the whole, some very basic, very simple, very straightforward, and pretty self-evidently useful guidelines.
(Alternatively, you can look at demon threads and trashfires and flamewars and go “oh, look, there’s the opposite of like eight of the ten guidelines in the space of two comments.”)
I suppose one could be like “has Duncan REALLY proven that Julia Galef et al speak this way?” but I note that in over 150 comments (including a good amount of disagreement) basically nobody has raised that hypothesis. In addition to the overall popularity of the list, nobody’s been like, “nuh-uh, those people aren’t good communicators!” or “nuh-uh, those good communicators’ speech is not well-modeled by this!”
I think that, if you were to take a population of 100 random newcomers to LessWrong, well over 70% of them would lack some subset of this list and greatly benefit from learning and practicing it, and the small number for whom this is bad advice/who already have A Good Thing going on in their own thinking and communication are unusually unlikely to accidentally make a bad update.
Or, in other words: I think [the thing that you fear happening] is [a genuinely good thing], unless I’ve misunderstood you. Well-kept gardens die by a lack of good standards; the eternal September problem is real.
Okay, a few things:
I don’t think this so much as I think that a new person to lesswrong shouldn’t assume you are more likely to be right than they are, without evidence.
Strongly disagree. They don’t seem easy to evaluate to me, they don’t seem straightforward, and most of all they don’t seem self-evidently useful. (I admit, someone telling me something I don’t understand is self-evident is a pet peeve of mine).
I personally have had negative experiences with communicating with someone on this list. I don’t particularly think I’m comfortable hashing it out in public, though you can dm me if you’re that curious. Ultimately I don’t think it matters—however many impressive great communicators are on that list—I don’t feel willing to take their word (or well, your word about their words) that these norms are good unless I’m actually convinced myself.
Edit to add: I’d be good with standards, I just am not a fan of this particular way of pushing-for/implementing them.
Well, not to be annoying, but:
Your own engagement in these three comments has been (I think naturally/non-artificially/not because you’re trying to comply) pretty well-described by those guidelines!
I hear you re: not a fan of this method, and again, I want to validate that. I did consider people with your reaction before posting, and I do consider it a cost. But I think that the most likely alternatives (nothing, attempt to crowdsource, make the claim seem more personal) were all substantially worse.
The Less Wrong mods don’t agree with this view of rationalist discourse, do they…?
I mean, I have a deep and complicated view, and this is a deep and complicated view, and compressing down the combination of those into “agree” or “disagree” seems like it loses most of the detail. For someone coming to LW with very little context, this seems like a fine introduction to me. It generally seems like straightforward corollaries from “the map is not the territory”.
Does it seem to me like the generators of why I write what I write / how I understand what I read? Well… not that close, with the understanding that introspection is weak and often things can be seen more clearly from the outside. I’ll also note as an example that I did not sign on to Sabien’s Sins when it was originally posted.
Some specific comments:
I have a mixed view of 3 and 4, in that I think there’s a precision/cost tradeoff with being explicit or operationalizing beliefs. That said, I think the typical reader would benefit from moving in that direction, especially when writing online.
I think 5 is a fine description of ‘rationalist discourse between allies’. I think the caveat (i.e. the first sentence of the longer explanation) is important enough that it probably should have made it into the guideline, somehow.
I think 6 blends together a problem (jumping to conclusions) and a cure (maintaining at least two hypotheses). Not only will that cure not work especially well for everyone, it’s very easy to deploy gracelessly (“I do have two hypotheses, they’re either evil or stupid!”). Other cures, like being willing to ask “say more?”, seem like they might be equally useful.
I think 10 (and to a lesser extent, 7) seem like they’re probably directionally correct for many people, but are pointing at an important area with deep skill and saying “be careful” instead of being, like, actually reliable guidelines.
[Mod] I think they’re nice principles to aspire to and I appreciate it when people follow them. But I wouldn’t want to make them into rules of what LW comments should be like, if that’s what you mean.
Sure, but the OP is not suggesting making them into rules either, so on that point you clearly don’t disagree. But this part seems to run directly counter to what LW mods have said in the past, multiple times:
I think this has been one of the sources of conflict between Duncan and the mod team, yes.
Right, that was my impression. The reason I asked was that regardless of how much we all agree about any given set of guidelines (such as those described in the OP), it’s all for naught if we disagree about what is to be done when the guidelines aren’t followed. (Indeed that seems to me to be the most important disagreement of this whole topic.)
I also… predict that you and Duncan would get into conflict/disagreement about the operationalization of when/how to apply this particular norm.
Does Said have his equivalent of a Moderating LessWrong post? I do indeed feel like I’ve had norms-level disagreements with him in the past, but Said: I don’t actually have a clear sense of your position such that I could try to pass an ITT.
Hmm, I’m not sure that I have a unitary “position” here beyond agreement with the principle discussed in this comment thread. I have opinions on various aspects and details of the matter, of course, but I’d hesitate to give a summary or state an organizing principle without a good deal of thought (and perhaps not even then).
One thing that I would say is that in many cases, it seems like having “anti-rules” may be more productive than having “rules”. What I mean by that is: if Alice says X, and X is perhaps undesirable, it may not be necessary to have a rule “don’t say X”, if instead you have a rule “when Bob observes disapprovingly that Alice says X, and suggests that she’d better explain why she said X, consider this a helpful and good act on Bob’s part (for the purposes of determining what rules apply to the interaction)”.
Another way to look at this is that one doesn’t need to institute a plan to build something, if one can instead guarantee that those who wish to build that thing be allowed to do so, and not interfered with. (Here we can see parallels with community-building efforts “in the real world”, and legal obstacles thereto.)
Going up a meta level, I’ll say that I prefer to go down a meta level. In other words, I prefer to assemble general principles of this sort from object-level questions. For this reason, asking about an overall “my position” may not necessarily be fruitful.
Not that I know of, although he’s written a bunch of comments touching on it.
I’m thinking less of “high level principles” and more like “what things do you consider edge cases, or how to balance other principles when they’re in conflict.”
Yes, that seems almost certain.
I’d probably agree with it in some contexts, but not in general. E.g. this article has some nice examples of situations where “do the effortful thing or do nothing at all” is a bad rule:
It does feel to me like allowing people to be Stage 2 is a requirement for helping them get away from Stage 1 and up to the higher stages. And this bit in particular
sounds to me like the kind of a norm that would push people down to Stage 1 from Stage 2.
(This comment is just notes that I took while I was reading, and not some particular single thought I put together after reflecting on the whole post.)
I’m so honored to be on your list of “unusually good rationalist communicators”. I really want to see your description of what each of us does 2-10x more than random LW users. Not just because I want you to talk about me; mostly I imagine it would be really educational for me to read some of these people’s writings with your perceptions in mind, especially if I first read an excerpt from each and try to write down for myself what they are doing unusually well. I certainly think my own writing is a lot stronger on some of your discourse norms than on others.
>Some ways you might feel when you’re about to break the Nth Guideline:
<3<3<3 that you included these
Question about Guideline 4: Where do you think my tendency (or Anna’s tendency, or Renshin’s tendency) to communicate in the form of interpretive poetry instead of unambiguous propositions falls with respect to Guideline 4? Or, more precisely: What thoughts do you have when you hold “interpretive poetry that results from attempts to express intuitions” up next to “make your claims clear”?
I think I have felt confused about this for years. Earlier today, while trying to share my cruxes after saying that I don’t want to cross post something to the EA forum, one of my cruxes was “I will be eaten by piranhas.” I’m not yet sure, in the sense of being able to communicate about it in a, um, standard way, what I mean by this, although I can absolutely belief report that it’s among my major concerns with posting there. It’s actually quite unusual for me to speak this way on LW, even though I happen to have done so today, but I think that outside of Lesswrong, I speak in “interpretive poetry” quite a lot. This is much of why I have by and large *avoided* participating in Lesswrong, preferring to share most of my writings as Google docs, emails, on Facebook, or as posts to my private blogs instead. I worry that my poetry is not welcome here, perhaps for very good reason; yet I also strongly suspect that there’s a ton of valuable information contained in my poetry, and it seems a shame not to share it here just because it often takes me several years after having a thought before I’m able to figure out what clear and unambiguous propositions correspond to the thought. I just seem to do most of my thinking at a level that is so far below words that translating all the way up into standard English is extremely difficult; should I indeed keep quiet, until I have done enough work that I can express myself “clearly”?
...Apparently I was so captivated by the rest of the essay that I stopped taking notes.
Something about how you have written this makes me feel like I’m accidentally training you to write differently, perhaps by making you review my essays and occasionally yelling at you* about yours. I dearly hope it is in fact for the better, and that I am not instead dragging you down. Anyway for better or worse, I found this piece delightful to read, and I expect to think about it and refer to it often in the future. [*Yes I am aware this is not a great description of my actual behavior, but it sort of feels to me like yelling compared to my baseline.]
Aw man, I was hoping that “Sabien’s Sins” was going to be “Here’s how I, Duncan Sabien, often fail at good rationalist discourse, and here are some actual examples of bad things I have said, and here is what it was like and how I think about it.”
>if you read further, please know that you are doing the equivalent of reading dictionary entries or encyclopedia entries and that the remaining words are not optimized for being Generically Entertaining To Consume.
Well, *I* found the entire thing highly entertaining, at any rate. Perhaps you ought to write encyclopedias.
This is probably my favorite essay you have written. Let’s get married at a trampoline park.
I would also love a more personalized/detailed description of how I made this list, and what I do poorly.
I think I have imposter syndrome here. My top guess is that I do actually have some skill in communication/discourse, but my identity/inside view really wants to reject this possibility. I think this is because I (correctly) think of myself as very bad at some of the subskills related to passing people’s ITTs.
IMO, the answer here is a resounding “No!”
I think there’s a sort of unfortunate implication in the wording of the fourth guideline that I couldn’t quite erase without spending [so many words it ceased to be a simple statement].
But I do actually think “Do X or explicitly acknowledge that you can’t” means “Do X or Do Y” where Y is acknowledging that you can’t; I don’t actually think doing Y is worse than X, such that the fourth guideline says “Do X unless you suck.”
I mostly think of the fourth guideline as something like “everything in its place” or “everything with its proper epistemic status tags.”
I think there’s a T R E M E N D O U S amount of information that can be conveyed in poetry, that at least gets people looking in the right general direction or standing in the right general vicinity, and that a rationality community that taboo’d it because of its partial illegibility would be cutting off a major source of valuable intuition and wisdom.
(I especially think this because most of the skilled and generative original researchers I have met endorse thinking and speaking in poetry and would be horrified to find themselves in an environment where they could not.)
I think the Duty of an individual trying to not-undermine-rationality is to say “the following is poetry, because poetry is all I have; sorry; seems substantially better than nothing” at the start or the end of the poem. Then no one thinks the poetry is supposed to be airtight and fully legible, and thus the perceived standard of legibility is not undermined.
I think the short statement would be a lot weaker (and better IMO) if “inability” were replaced with “inability or unwillingness”. “Inability” is implying a hierarchy where falsifiable statements are better than the poetry, since the only reason why you would resort to poetry is if you are unable to turn it into falsifiable statements.
I changed it to say “aren’t doing so (or can’t).”
I just went to grab the link to Logan’s comment on the piranhas to note that in that context, I think including such a disclaimer would make the comment worse. I was sad to find that (I think?) they had edited it to have a disclaimer.
(there are other contexts where I think such a disclaimer is appropriate for logan-poetry-on-LW)
>I was sad to find that (I think?) they had edited it to have a disclaimer.
(Actually it originally had that disclaimer, or else I probably wouldn’t have posted it.)
Huh, I guess I misremembered (glad I hedged there). If it was there originally I didn’t notice it which is perhaps evidence that it didn’t, in fact, make the comment noticeably worse.
Yeah, on second thought, please take that as the spirit of a recommendation and not the letter; the main threat vector I see is causing people confusion about what constitutes rigor or precision or a literal claim. I agree that there are a lot of cases where “this is not a literal claim” is pretty obvious on the surface to all-but-Lizardman-constant of the audience, and in those cases do not think a sign saying HERE COMES A POEM is always or even often indicated.
I like this post a lot, and think there’s a decent chance I end up using it as a reference. I saw an earlier draft of this by-accident a year ago, and think this version is a significant improvement, has most of the correct caveats, etc.
I’m not endorsing them as “the norms of LessWrong”. I haven’t read through each expansion in full detail, and perhaps more importantly, haven’t thought through the implications of everything (often someone will say “it’d be good to do X in this circumstance because Y”, and I’ll nod along going “yeah Y sounds great, X makes sense”, and then later someone points out “emphasizing X has cost Z” in a way that suddenly sign-flips my opinion, or adds enough nuance to significantly change my belief).
I know there are at least a couple places here I have some hesitations on fully endorsing as stated.
But, I feel fairly good at least going-on-record saying “Each of the ideas here is something anyone doing ‘rationalist discourse’ should be familiar with as a modality, and shift into at least sometimes” (which I think is a notch below “guideline”, as stated here, which is in turn a notch below ‘rule’, which some people misinterpreted the post as saying).
I’ll hopefully have more to say soon after stewing on everything here more.
Here are some places I disagree a bit, or want to caveat:
#10 feels new, and not well-argued-for yet.
I think point #10 is pointing in a good direction, and I think there are good reasons to tend-to-hold-yourself to a higher standard when directly modeling or assessing others’ internal states. It seems at least worth considering making this “a guideline.” But, whereas the other 9 points seem to basically follow from stuff stated in the sequences or otherwise widely agreed upon, #10 feels more like introducing a new rule. (I think “be a bit careful psychologizing people” is more like an agreed-upon norm, but it’s very fuzzy, and the way everyone else implements it feels pretty different from how Duncan implements it.)
I do think that better operationalizing “be careful psychologizing” is an important (and somewhat urgent) problem, so I have it on my TODO list to write a post about it. It may or may not jibe with Duncan’s operationalization.
I do think there is some amount of “Duncan just cares about this in a way that is relatively atypical, and I haven’t heard anyone else really argue for”. “Hold yourself to the absolute highest standard” feels like a phrasing I don’t expect anyone else to endorse. (Note: if you endorse that phrasing, do feel free to reply here and say so!).
(I do think the expansion-description of the norm is something I expect a fair number of LWers to endorse)
I think Zack Davis is onto something with point #5
Zack has a response post. I haven’t fully parsed the post and formed an all-things-considered opinion on it, but I think it’s articulating an important alternate frame on #5 and it’s one of the reasons I feel most like these are not “basics of rationalist discourse”, but, “basics on one type of rationalist discourse.”
I think it’s true that “aim for convergence on truth” isn’t (necessarily) in conflict with the sort of competitive debate style group-truthseeking that Zack prefers. But it seems to me much of the point of the stated phrasing is to create a particular vibe, that percolates through heated disagreement, and I think there are other vibes that accomplish other goals (even within the space of solving the particular set of problems here)
I think Habryka (my boss) also often prefers something similar to Zack’s conversational vibe, and I’ve come to find a value in it sometimes.
(Note: Duncan has blocked Zack from commenting on his posts, so Zack can’t respond here. I think this is a fine use of the author moderation tools we built (see my original reasoning), and Zack writing a response post in a separate thread is also a fine/intended response. But, seemed potentially important for people to be modeling)
I feel anxious about the difference between “these norms are good” and “the way Duncan would enforce these norms is good.”
I have some kind of longstanding disagreement with Duncan about how to approach norms, and I still don’t really know for sure what our disagreement is about, but I feel a bit anxious about the bridge between “maybe having these guidelines be an improvement over LW status quo” and “being excited about Duncan proactively pushing for them.”
Duncan reiterated in the comments here that these are not rules, they’re guidelines. He’s also used the word “norms”. IMO the word “norm” technically includes both rules and guidelines but connotationally feels more like “rules” and I think there’s a very strong slope towards this. (And people are correct to be wary of a strong slope towards this)
When I mentioned I was writing this comment, Vaniver said “I think it’d be better if you specifically addressed the part where Duncan’s enforcement approach is motivated strongly by protecting the Neville Longbottoms of the world.” Something about this [Vaniver’s suggestion of it] feels off to me, like, it’s not really my crux, and I’m not sure it even interfaces with my desire to flag it here (which is not really justifying myself to Duncan or trying to hash anything out, just noting to the LW community my overall position here, since I curated this post).
But, doesn’t seem wrong to do, so, noting:
I don’t really alieve that [the vague (conflict)-y?] style of moderation I expect Duncan to veer towards is as helpful as he thinks for helping the Neville Longbottoms of the world. I believe that some people message him about things like this. Maybe if I had read those messages I’d feel differently. But I… also expect Duncan-style moderation to have at least some kind of negative effect on some Neville Longbottoms, which isn’t as visible. (I personally find it very expensive to give Duncan negative feedback, have heard this is true from other people, and so don’t trust his assessment of his overall Neville Longbottom impact. Maybe also disagree on “who counts as a Neville Longbottom”)
Insofar as it was the best way to protect Nevilles, I do just think it’s super costly, and while I value protecting Nevilles I don’t think it’s near the top of my list of things worth burning this many resources for (both my personal resources, and communal “willingness to engage in conflict.”). I recognize that is sad for Nevilles.
Not quite Neville-specific but relevant: a central crux of mine re: Duncan is that his norm-enforcement approach doesn’t feel compatible with the other people who seem, to me, like they should be his closest allies. Norm Innovation and Theory of Mind is my elaboration on this.
i.e. the fact that Duncan is somehow blocking Zack Davis instead of figuring out how to work with him, when I think I’ve probably learned/updated more from Zack Davis on many topics that Duncan cares strongly about than I have from Duncan, feels like something has gone pretty wrong somewhere. (Also, I think I’d probably count Zack specifically as a Neville? When Duncan objects to many of Zack’s phrasings of things [which I agree with Duncan are misleading/bad-some-way], I parse the situation more like Neville trying to stand up for something while not quite having the skills to do it, where my primary impulse is to try and shelter/protect that so it feels safe enough to pursue it in a more healthy way)
(I feel a bit awkward using this example where Zack can’t directly reply, and might move some of this onto Zack’s thread if it feels appropriate. Sorry Zack. My vague model of you endorses me saying things without worrying overmuch about how you feel about it in the moment, but, uh, sorry if wrong)
I’m almost certain that I’ve commented on this before, and I really don’t mean to start that conversation again… but since you’ve mentioned, elsethread, my potential disagreements with Duncan on rule/norm enforcement, etc., I will note for the record that I think this facility of the forum (authors blocking individual members from commenting on their posts) is maybe the single worst-for-rational-discussion feature of Less Wrong. (I haven’t done an exhaustive survey of the forum software’s features and ranked them on this axis, hence “maybe”; but I can’t easily think of a worse one.)
Maybe placing a button that leads to a list of blocked users under each post (with a Karma threshold to avoid mentioning blocked users that could be spammers or something, with links to optional explanations for individual decisions) would get the best of both worlds? Something you don’t need to search for. (I’d also add mandatory bounds on durations of bans. Banning indefinitely is unnecessarily Azkaban.) Right now AFAIK this information is private, so even moderators shouldn’t discretionally reveal it, unless it’s retelling of public info already available elsewhere. (Edit: It’s not private, thanks to Said Achmiz for pointing this out.)
Being able to protect your posts seems important to some people, and the occasional feud makes it a valuable alternative to involving moderators. But when readers can’t see the filtering acting on a debate, it hurts the clarity of their perception of it.
Isn’t this info visible at https://www.lesswrong.com/moderation ?
So it is, thanks for pointing this out. I’ve even seen this before, but have since forgotten, to the point of not being aware it existed when writing my comment. From the way Duncan phrased his reply, he either also wasn’t aware of this, or there is a way of banning users privately as opposed to publicly, so that those decisions don’t show up on that page.
This is exactly the kind of situation I meant to rule out by saying that the list of banned users should be directly accessible from each affected post, as something you don’t need to search for.
Yes, sure, if you’re going to allow people to ban users from their posts, the list of these banned users should be prominently displayed on each post. But this is a fairly weak mitigation. Much better to not have the feature in the first place.
(I was not aware but also am fortunately not ashamed of my blocks, so)
I made my block of Zack public, so Ray has not done anything amiss.
If we instead had a culture that would ban Zack for confidently, emphatically, and with-no-hedges calling other users insane when he hasn’t even finished reading their claim, it would be less necessary. I have very few users blocked in this way.
But such a culture would be bad, so it is good that we don’t have it.
I haven’t seen this comment you refer to (I gather it’s been deleted or edited), so I will refrain from opining on whether perhaps some mild censure would be warranted for such. Certainly this is not out of the question.
This hardly makes it better, and I think you can probably see why.
I (tentatively, since we haven’t had a real discussion and it would be silly to be confident) think the culture you are envisioning is made of fabricated options.
(Link is to a specific section discussing block lists in particular.)
In particular, I think you underestimate the corrosive power of a flood of bullshit, and how motivated some people are to produce that flood. Concentration of Force also feels relevant here (and yes, I see the irony).
That entire argument (which I’d read before, but re-read now) seems to me to be wrongheaded in so many different ways that I’d have to write a response at least twice as long to untangle it. Certainly the sort of (indeed straw) view you describe has almost nothing in common with my view on the matter; and it also entirely fails to mention what I consider the most important aspect of the question. There are also background assumptions which I consider to be totally wrong, implied values which seem to me to be thoroughly undesirable, etc.
I think I’d have to see some examples of what you’re thinking of here before I could have any opinion on whatever this is.
Could you elaborate? What exactly is the relevance?
Please, do so.
That would be exactly the sort of “starting that conversation again” that I said I didn’t want to do…
Just giving you an opportunity to hang “this is wrongheaded” on something more legible than “Said opaquely asserts that it is so.” Perfectly fine for you not to feel like taking it.
>”Hold yourself to the absolute highest standard” feels like a phrasing I don’t expect anyone else to endorse. (Note: if you endorse that phrasing, do feel free to reply here and say so!).
I agree with this phrasing, as I understand it. It seems important to note that by “hold yourself to the absolute highest standard” in this context, what I mean is “make the very best effort you’re capable of to follow the rest of these guidelines, taking no shortcuts and slowing down however much is necessary to accomplish this”, as opposed to something more like “consider yourself a terrible person if you fail to uphold any virtue whatsoever while modeling or assessing others’ internal states”.
Hmm. I don’t think most people will interpret “hold yourself to the absolute highest standard” that way.
I agree, that’s a very strange interpretation of “hold yourself to the absolute highest standard”, to the point where I’d say “no, that’s just not what it means to hold someone to some standard”.
Er. Are you intentionally conflating [holding yourself to a standard] with [holding someone else/someone else holding you]? Because those are very different mental and social motions and I’m not sure why you just elided the distinction.
How are they different?
I believe the burden of proof/burden of explanation here is on “how are these two obviously different things the same?”
I’ll be happy to ELI5 if you genuinely try to figure out how they’re different and genuinely fail, but I have a hard time believing that this will be necessary.
I don’t understand how they aren’t the same. It wouldn’t have occurred to me to make this distinction before you said anything, and it’s still not occurring to me now.
Holding someone else to some standard: if they perform below the standard, consider them to have failed, act accordingly.
Holding yourself to some standard: if you perform below the standard, consider yourself to have failed, act accordingly.
What’s the difference? It can’t be that you don’t have to “act accordingly”; that’s true in both cases. It can’t be a matter of who knows the person failed—in both cases, that’s (a) the person who failed, (b) the person doing the holding-to-a-standard, (c) any onlookers. (In the latter case, (a) and (b) are the same person.)
So what is the important difference you’re seeing?
This does not yet seem to me like a genuine attempt to figure it out; this reads to me like what you actually attempted to do was confirm in your own head that you were correct.
Come on. Why am I the one who needs to be making a “genuine attempt to figure it out”? Have you given me any reason to believe that you know something that I don’t, that you’re more right about this? You disagreed with something I said, and that’s fine. Tell me why. I explained my view (even though it seemed obvious to me), but you’ve explained nothing, and only claimed that your view was obvious. I think the ball is clearly in your court, and the “instructor” posture is, to put it mildly, unhelpful.
Because the claim you’re making is “two completely different things are effectively the same.”
Note that I didn’t say you “need” to make the genuine attempt. I said that’s what it would take for me personally to be happy to explain it to you.
You’re welcome to not pay that price of entry! But I’d prefer you not try to pass off rehearsing your own position as paying that price of entry.
So you say. But I say that the claim you are making is “two things that are effectively the same are completely different”.
Thus by the same token, it’s you who should be making a genuine attempt to figure out what I mean!
But of course this is silly. Again: you disagree with a thing I said, and that’s fine. Tell me why. This shouldn’t be hard. What, in brief, is the difference you see between these two (allegedly) obviously different things?
Also, I must note that I did, in fact, “genuinely try to figure out how they’re different”, and failed (genuinely, one assumes?). I have no idea why you would suggest otherwise. I can’t read your mind, so I have no idea even what sort of objection you are thinking of. Short of that, analyzing the apparently relevant aspects of the question is all I can do, and is what I did. (Indeed even doing this, for what seems to me to be an obvious point, was motivated by an unusually high degree of trust that my interlocutor had some non-stupid reason for disagreeing, so it seems quite absurd to have it met with a suggestion that it was somehow insufficient.)
The reason I would suggest otherwise—
(Actually, I note that I was careful to only talk about my perceptions and not make claims about your internal state, because indeed I have no access to it)
—the reason that I stated that it did not seem to me to be the case that you had made a genuine effort is because your comment both:
a) bears hallmarks/markers that one would expect to find in someone rehearsing their preexisting belief, such as listing out justifications for that belief
b) conspicuously lacks hallmarks/markers that one would expect to find in someone making a serious attempt, such as offering up any hypotheses at all (even if tenuous; “the best I was able to come up with was X, but I’m pretty sure that’s not what you’re thinking”), or making visible a thought process that contains (e.g.) “okay, but if this were true, what sorts of things might I expect to see? Hmmm...”
If you were to collect 1000 examples of people genuinely trying to squint their way across an inferential gap, very very few of them would look like yours.
If you were to collect 1000 examples of people who were mostly just paying lip service to the idea of entertaining an alternative hypothesis, and primarily spending their time rehearsing their own arguments, the vast majority of them would look a lot like yours.
(Additionally, though this is small/circumstantial, I’m pretty sure your comment came up much faster than even a five-minute timer’s worth of thought would have allowed, meaning that you spent less time trying to see the thing than it would have taken me to write out a comment that would have a good chance of making it clear to a five-year-old.)
Basically, it would be unreasonable for someone to conclude, based on looking at your comment, that you had probably put forth anything resembling a genuine effort. It’s certainly possible; the representativeness heuristic can lead us astray and it’s not always what it looks like. But the safe bet is clear.
Another possibility is that he did some of his thinking before he read the post he was replying to, right? On my priors that’s even likely; I think that when people post disagreement on LW it’s mostly after thinking about the thing they’re disagreeing with, and your immediate reply didn’t really add any new information for him to update on. Your inference isn’t valid.
I agree that it wouldn’t be valid as an absolute, or even as a strong claim.
I’m not sure I agree that it is no evidence at all.
(speaking loosely) This is such a weird conversation, wtf is happening.
(speaking not so loosely) I think I’m confused? I have some (mutually compatible) hypotheses:
H1) the concept “burden of proof” is doing a lot of STUFF here somehow, and I don’t quite understand how or why. (Apparently relevant questions: What is it doing? Why is it doing it? Does “burden of proof” mean something really different to Duncan than to Said? What does “burden of proof” mean to me and where exactly does my own model of it stumble in surprise while reading this?)
H2) Something about personal history between Duncan and Said? This is not at all gearsy but “things go all weird and bad when people have been mad at each other in the past” seems to be a thing. (Questions: Could it be that at least one of Duncan and Said has recognized they are not in a dynamic where following the rationalist discourse guidelines makes sense and so they are not doing so, but I’m expecting them to do so and this is the source of my dissonance? Are they perhaps failing to listen to each other because their past experiences have caused strong (accurate or not) caricatures to exist in the head of the other, such that each person is listening mainly to the caricature and hearing mainly what they expect to hear by default? What exactly is their past history? How much do which parts of it matter?)
H3) Duncan and Said have different beliefs about the correct order of operations for disagreements (or something like that). Perhaps Duncan emphasizes “getting structural discourse practices in proper order first”, while Said emphasizes “engaging primarily with the object level topic by whatever means feel natural in the moment, and only attending to more structural things when stuck”. (Questions: Is this true? Why the difference? Are there times when one order of operations is better than another? What are the times?)
FWIW, it’s not at all clear to me, before really thinking about, what the difference is between “holding oneself to a standard” and “holding someone else to a standard”. Here’s what happens when I try to guess at what the differences might be.
1) Maybe it has something to do with the points at which intervention is feasible. When holding yourself to a standard, you can intervene in your own mind before taking action, and you can also attempt to course-correct in the middle of acting. When holding someone else to a standard, you can only intervene after you have observed the action.
2) Like 1, except since you can also intervene after observing the action when holding yourself to a standard as well, “holding yourself to a standard” is an umbrella covering a wider range of thingies than “holding someone else to a standard”, but some of the thingies it covers are the same.
3) Perhaps the difference is a matter of degree, for some reason? Like perhaps there is something about holding other people to standards that makes the highest standard you can reasonably hold someone to much lower than the highest standard you can reasonably hold yourself to, or (less plausibly?) vice versa.
Of these, 2 certainly seems the closest to matching my observations of the world in general; but it does not help me make sense of Duncan’s words as much as 1 does.
There’s also a huge distinction between the set of standards it’s possible to try to hold oneself to, which is a set you will mostly feel on-board with or at worst conflicted about—
(Like, when you try to hold yourself to a standard you either think it’s good/correct to do so or at least a part of you thinks it’s good/correct to do so)
—versus the set of standards you could try to hold someone else to, which contains a lot of stuff that they might reject or disagree with or think stupid, etc.
The kinds of conflict that can emerge, internally, from trying to hold myself to some standard are very very different from the kinds of conflict that can emerge, interpersonally, from trying to hold someone else to some standard. The former has way fewer ways in which it can go explosively wrong in the broader social web.
Disclaimer: I know Said Achmiz from another LW social context.
In my experience, the safe bet is that minds are more diverse than almost anyone expects.
A statement advanced in a discussion like “well, but nobody could seriously miss that X” is near-universally false.
(This is especially ironic because of the “You don’t exist” post you just wrote.)
Yes, that’s why I haven’t made any statements like that; I disagree that there’s any irony present unless you layer in a bunch of implication and interpretation over top of what I have actually said.
(I refer you to guideline 7.)
Can you explain what you mean by “the Neville Longbottoms of the world”? Who are these people, what are their defining characteristics? (I’ve read HPMOR, but some people—so I am told—have not; and even I don’t really understand what you mean here.) What is involved in “protecting” them…?
Oh, also: I think Jim’s comment on #6 also felt like an important point to me.
(Agreed; I upvoted at the time though haven’t figured out how to respond or how to act on it yet.)
My only major disagreement with the above is that I don’t think Zack’s actually responding to my post, and wouldn’t characterize it as such. =P
I think Zack’s post has some interesting thoughts, and it’d be much easier to get at those thoughts if they weren’t [pretending to be in response to me but actually responding to a caricature of Zack’s invention]. If someone keeps saying things you agree with as if they are defeaters of your previous point, it’s … exhausting.
I have a post in the works about how things which seem to be quite close to each other (like me and Zack) are actually often VERY distant (it’s kind of trying to flip the valence on “the narcissism of small differences”), and their apparent closeness an artifact of something like a blind spot or colorblindness.
I… roll to disbelieve that this is your only disagreement with the above.
My only major one.
You raise some hypotheses where I take the other side, but I think the conversations are worth having and it’s not obviously dumb for you to have the preliminary positions you have.
Well, if Zack’s post isn’t properly understood as a response to yours, then there’s a disconnect somewhere. Obviously, if you say that you didn’t mean what Zack characterized you as saying, I believe you; but the question then is—what did you mean? I confess that I can’t see how to apply your point #5 either.
Er. If you’re actually saying “the burden of proof is on you to demonstrate that Zack hasn’t properly characterized your point,” as opposed to “it’s on Zack to demonstrate that he has,” then I’m not sure how to productively begin.
You may not be saying that. But Zack’s post reads, to me, like someone who read the bullet point, leapt to a conclusion about several dumb things that it meant, refused to read the expansion, and is off to the races. (This accords with my previous experiences of Zack.)
If you’ve read the expansion on 5 and you’re still confused, I’m happy to answer questions.
No, I am not saying that. All I meant was that I read your explanation, I had some thoughts about it; then I read Zack’s post, thought “yep, sounds about right”; then I read your comment on Zack’s post and was confused. Apparently, what I (and Zack?) understood you to be saying is not what you meant to say.
What you should do with this information is up to you, of course; I don’t say that you have any obligation here, as such. And, of course, there are some things that I could do: I could re-read that part of your post, I could ask specific questions, I could think more and harder, etc. Will I do some of those things? Maybe.
I don’t think that Zack’s reading is going to be especially representative; I think that a supermajority of people would not independently generate an understanding of 5 that matches his.
(Something different happens if people are given a Multiple Choice question where Zack’s interpretation is one of four or five possible interpretations; there I suspect it is an attractor that would drag more people in. This is most of why it’s important to me that he be understood to not actually be responding to me, rather than to his own strawman.)
If I were to discover that e.g. half of readers interpreted me the way Zack did, this would mean that I urgently needed to rewrite the post to head off those misunderstandings at the pass.
But I don’t currently anticipate that.
FYI I generated an understanding of 5 that was similar enough to Zack’s understanding to also nod along with his post and think “yup, sounds about right”(ish).
5 and 10 do feel like the weakest ones/the ones most likely to earn a rewrite that manages to strengthen them substantially.
But, like. It is specifically because of an anticipation that users like Zack would immediately and enthusiastically leap to recalcitrant strawmanning that I felt I had to post this wholesale as a complete list rather than ask “hey, I’ve got like eight of these I feel good about and two more I need help with; whaddyathink, LW?”
Like, what would’ve allowed a discussion on the merits of 5 and 10 to productively proceed is the prereq of enough-of-something-like-5-and-10 already being in the water (plus maybe a healthy dose of 8 and 9), and I was not at all confident we have that.
I for sure think that a conversation with Julia and Vaniver and Scott, etc., on how to create the better thing that 5 and 10 were trying to point to, would be lovely, and would work.
I feel like you’re not doing the mirror of guideline 8, here? Like, you’re being asked to restate or clarify your point, and your response looks to me like “well, until you show that you got it the first time around, I’m not going to clarify it.” If they got it the first time around, why would they need clarification?
Here and in one other notable place in this larger back-and-forth, I wasn’t asking him to show me that he understood it; I was asking him to share the labor of getting him across this inferential gap.
This was written when I thought Vaniver’s question was in the other place, so it’s a smidge odd as an answer here, but:
If someone asks me to explain why I think the sky is blue, especially if it’s someone who’s historically been a mixture of hostile, dismissive, and personally critical, I am suspicious that anything worthwhile will come from me putting forth effort.
(Here I’m basically claiming “I would have answered this question differently if it had come from Vaniver, or RandomLWUser420.”)
If they demonstrate that they’re really actually curious, by e.g. showing a little of their own attempt to figure out why I might have this belief, I am reassured, and more willing to give them the effort.
But of course, they’re welcome to say “not gonna jump through a hoop,” and in my world we have then achieved cooperation (in the form of each of us noting what we’re not interested in doing, and not doing it).
(I’m not super motivated to correct misunderstandings in the heads of Said or Zack particularly, so I didn’t have a want, myself, along the lines of “please let me try again.”)
Another way to say this:
Neither Said (who was actually present in the conversation) nor Zack (who was spiritually present and being invoked and very much at the forefront of my mind) seems to me to ever bother with the sixth guideline/split-and-commit, and more locally Zack was not bothering with it in his post in any genuinely substantive way, and Said was similarly not bothering with it in his back-and-forth with me.
“My conversational partner is willing to flex their sixth guideline muscles from time to time” is a prerequisite for my sustained/enthusiastic participation in a conversation.
In my experience, Said, is pretty good at not jumping to conclusions in the ‘putting words in their mouth’ sense, tho in the opposite direction from how your guideline 6 suggests. Like, my model of Said tries to have a hole where the confusions are, instead of filling it with a distribution over lots of guesses.
I remember at one point pressing him on the “but why don’t you just guess and get it right tho” point, but couldn’t quickly find it; I think I might have been thinking of this thread on Zetetic Explanation. I don’t use his style, but it does seem coherent to me and I’m reluctant to declare it outside the bounds of rational conversation, and more than once have used Said as the target audience for a post.
This seems right and fair to me, and I think you and others feeling this way is a huge force behind the “we’re going to try to make LW fun again” moderation push of the last ~5 years.
While I also really like this post, I am confused by your reasoning. You want to have it as a reference because “Each of the ideas here is something anyone doing ‘rationalist discourse’ should be familiar with as a modality, and shift into at least sometimes”. I would like to know what you mean, because to me it sounds like having it as a reference to use when you think the other side in a debate should obey the standards, whereas you do not want to be restricted by the same set of norms. Would you like to elaborate?
Woah, not what I meant at all.
Duncan goes out of his way to say “this post is guidelines, not rules.” I go a bit further, saying “people should at least be capable of going into the modes listed here” which is not meant to say anything at all (yet) about which sorts of situations make this relevant.
I might want to use this, as a moderator, when I notice an argument going badly on LW (between two people who are not me), and say “hey guys, I think this might go better if Alice and/or Bob were trying to do [guideline X]”, without that being a definitive statement of “On LW you should be doing X most of the time.”
I want to be on record as someone who severely disagrees with OP’s standards. I want that statement to be visible from my LessWrong profile.
Here are N of my own standards which I feel are contrary to the standards of OP’s post:
I aim to ensure every discussion leaves both parties happier that it happened than not, and I do hope you will reciprocate this
I’ll go through the motions with you if you’re invested in what I think; preset guidelines are great, but I’ll always be happier if you ignore them and talk to me instead of saying nothing; I’ll negotiate adequate guidelines if necessary
Tell me what you’re thinking and feeling as fast as you want; I love impulsive responses! The worst thing you can do on impulse is to permanently end all discussion.
Leadingness (the opposite of misleadingness) is more important than truth, though truth is important. If a map is supposed by many to adequately reflect the territory, and yet it does not mark any CEV-threats or CEV-treasures that are in the territory, then that map is not going to help me much!
Hold me accountable to the twelfth rationalist virtue. If I think you’re an exceptionally virtuous person, your input will interest me no matter how poorly you substantiate yourself at first. Be daringly fallacious and dramatic. Wrench me from my delusions. Keep me sharp.
I’m not like those other pretenders to open-mindedness! I’m kakistocurious!
I quite like your list, but also don’t feel like it’s hugely in conflict with the OP.
Thanks for taking the time to register specific disagreement!
My reactions to this small sampling of your standards:
I think that 1 is quite important, and valuable, but subordinate to the above in the context of discourse specifically trying to be rational (so we do have disagreement but probably less than you would expect).
I think that characterizing this stuff as “going through the motions” is a key and important mistake; this is analogous to people finding language requests tedious and onerous specifically because they’re thinking in one way and feel like they’re being asked to uselessly and effortfully apply a cosmetic translation filter; I think that applying cosmetic translation filters is usually bad.
I just straightforwardly agree with you on 3, and I don’t think 3 is actually in conflict with any of the things in the post.
4 is the place where I feel closest to “Maybe this should supplant something in the list.” It feels to me like my post is about very basic kicks and blocks and punches, and 4 is about “why do we practice martial arts?” and it’s plausible that those should go in the other order.
5 feels to me as if it’s pretty clearly endorsed by the post, with the caveat that being daringly fallacious and dramatic works in my culture when signposted (which, as Ray points out under Logan’s thread, does not have to be explicit).
6 seems to be more like a … mood? … rather than a standard; it feels different from the other elements of your list. I am for sure kakistocurious, though probably less than you by a good bit if you consider it central to your personality.
I agree in some sense that for the purpose of my learning/interest, I would rather people err on the side of engaging with less effort than not engaging at all. However, I think community norms need to be more opinionated/shaped because it influences the direction of growth.
The culture I’ve enjoyed the most is one where high standards are considered desirable by the community as a whole, especially core members, but it is acceptable if members do not commit to living up to those standards (you gain respect for working like a professional, but it is acceptable if you just dabble like an amateur):
You are only penalised for failing to fulfill your responsibilities/not meeting the basic standards (e.g. being consistently late, not doing your work) and not for e.g. failing to put in extra effort. You have the freedom to be a hobbyist, but you are still expected to respect other people’s time and work.
Good norms are modelled and highlighted so new members can learn them over time
You need to work at the higher standards to be among the successful/respected within the group (the community values high quality work)
People who want to work at the higher standards have the space to do so (e.g. they work on a specific project where people who join are expected to work at higher standards or only people who are more serious are selected)
I like it because it feels like you are encouraged or supported or nudged to aim higher, but at the same time, the culture welcomes new people who may just be looking to explore (and may end up becoming core members!). It was for a smaller group that met in person, where new people are the minority, and the skill is perhaps more legible, so I’m not sure how that translates to the online world.
It’s also fun being in groups that enforce higher standards, but the purpose of those groups tends to be producing good work rather than reaching out to people and growing the community.
So I read in Rational Spaces for almost a decade, and almost never commented. When I did comment, it was in places that I consider Second Foundation. Your effort to make Less Wrong better is basically the only reason I even tried to comment here, because I had basically accepted that Less Wrong comments are too adversarial for safe and worthwhile discussion.
In my experience—and the Internet provides a lot of places with different discussion norms—collaboration is the main predictor of useful and insightful discussion. I really like those Rational Spaces where there is real collaboration on truth-seeking. I find a lot of interesting ideas in blogs where the comments are not collaborative but adversarial and combative, and I sometimes found interesting comments there, but I almost never found interesting discussion. I did, however, find a lot of potentially-insightful discussions where the absence of good will and trust and collaboration and charity ruined a perfectly good discussion. Sometimes it was people deliberately pretending not to understand what others said, and attacking a strawman instead. Sometimes (especially around politics) people failed to understand what others said and were unable to hear anything but the strawman version of an argument. A lot of the time, people were too busy trying to win an argument to listen to what the other side was actually trying to convey: trying to find a weak part of the argument to attack instead of trying to understand the vague concept in thingspace that the person was trying to gesture at.
The winning-an-argument mode almost never produces new insights, while sharing experiences and exploring together, without trying to prove anything, is the fertile ground of discussion.
All the rules in this list are rules I agree with. More than half of them will facilitate this type of environment, and other things you’ve written that I’ve read make me believe you find this kind of collaborative spirit important. But this is my way of seeing the world, in which this concept of Good Will is really important, and more than half of these rules look like practical ways to implement that concept. I’m not sure this is the way you think about these things, or whether we see the same elements of the territory and map them differently.
If I were writing these rules, I would have started with “don’t be irrationally, needlessly adversarial in order to wrongly fulfill your emotional needs; for example: [rules 2, 3, 5, 6, 7, 8, 9, 10]”
But there is enough difference that I suspect there is another concept, near my Good Will concept but different from it, around which these rules cluster, that I don’t entirely grasp.
Can you help me understand whether such a concept exists, and if so, point me to some posts that may help me understand it?
(I haven’t been able to come up with a useful reply to this comment yet but I wanted to note that I appreciated it.)
Thank you for making this post—I found it both interesting and useful for making explicit a lot of the more vague ideas I have about good discussions.
I have a question/request that's related to this: Does anyone have advice for what you should do when you genuinely want to talk to someone about a contentious topic—and you think they're a thoughtful, smart person (meaning, not an internet troll you disagree with)—but you know they are unlikely to subscribe to these or similar discourse norms?
To be frank, I ask this because I’m transgender (female-to-male) and like to discuss ideas about sexuality, sex, and gender with other trans people who aren’t part of the rationalist/adjacent community and just have different discourse norms.
To give an example, let’s say I mention in a post that it feels relevant to my experiences that my sex (at birth) is female, so I still identify as being “female” in some sense even though I’m socially perceived as male now. There’s a good chance that people will see this as asserting that trans women aren’t female in that same sense, sometimes even if I take care to explicitly say that isn’t what I mean. So in that case it’s specifically point 7 (be careful with extrapolation), though also many of the others come up often too.
For the record, I have a lot of understanding about people who have reactions like that. Many people who are openly trans on the Internet, or part of some other group that gets disproportionately targeted, have had to deal with a large number of harassing posts and comments (and I mean blatantly harassing, like telling them that they’re ugly or telling them to commit suicide) and have a lot less patience for people who might actually just be bad-faith jerks because, in their experience, a really large percentage of people are bad-faith jerks and they need to set a sort of “mental filter” so they don’t waste their time and energy talking to people who, in the end, don’t actually have the goal of fruitful discussion.
These discourse norms rely on both participants being willing participants, and though in my opinion that works well on LessWrong and similar spaces, on the internet as a whole there are places where it just doesn’t. But sometimes I want to talk to someone even though we are in a place like that.
I don’t have any big, epiphanic generalized advice, but one thing that feels pretty useful:
People in my experience are almost always willing to make one stretch, in a conversation, especially if it’s acknowledged as a stretch.
Like, if you ask them “okay, look, for the next five minutes, I’d like to use words in this particular way, and I get that you don’t want to use words that way in general and that makes sense, too, but if you could do me a favor …”
Usually, in my experience, people are open to those requests? So it boils down to something like, thinking strategically about which single discourse norm you’d find most useful, for which single five-minute chunk.
(This tends to have the benefit of making those groups more accustomed to receiving and granting small discourse requests in general, which is helpful for everybody.)
And you can, like, balance it out, too—you don’t always have to have the request be “more of a certain kind of rigor or precision.” Like, you can sometimes say “okay, I can’t express this in reasonable, fair words, so what I’d like to do is spew some unfair gunk and then go back afterward and cut out the parts I don’t endorse. Is that okay with you all? Like, can I just get the words out, first, and then we can go back and strike some of them?”
Another important piece of this puzzle in my experience is making such agreements with specific conversational partners. Like, if you’re on a Discord with 1000 people, it’s impossible to get them all to shift modes at once, but you can usually manage to do something like “Hey, username, can I try a different mode real quick?” and then either just don’t engage with other people butting in, or gently say “yeah, I’m doing a weird thing with username right now, scroll up for details” or whatever.
I’ll preface my comment by acknowledging that I’m not a regular LessWrong user and only marginally a member of the larger community (I followed your link here from Facebook). So, depending on your intended audience for this, my comments could be distinctively useful or unusually irrelevant.
I’m terribly grateful for the context and nuance you offer here. The guidelines seem self-evidently sensible but what makes them work is the clarity about when it is and isn’t worth tolerating extra energy and pain to follow them. A few notes that are almost entirely meta:
1) I suspect that nearly all objections people have to these can be forestalled by continued editing to bake in where and how they properly apply—in particular, I imagine people emotionally reacting against these because it’s so uncomfortable to imagine being hit with criticism for not following these guidelines in cases like:
A public opinion or social conflict situation that is definitely not a collaborative search for truth
Sharing painful emotions or calling attention to an observable problem
Seeking help expressing a nascent idea or self-insight that has to go through a shitty first draft before one is ready to communicate it with nuance and precision.
Your expansions make it perfectly clear that you recognize situations like these and believe people should handle them in effective and/or compassionate ways—my impression is that they either don’t fall into the domain of “rationalist discourse” or that rationalist discourse can create a container allowing not-rationalist-discourse to exist within it (as you described in the comment thread with LoganStrohl about signaling when something is poetry). So I’m mentioning them only to call attention to misreadings that might, with superb editing, be avoided without weighing down the language too much.
2) I’d be interested to know more about how you see this resource being used. If you see it as something that could become a key orientation link for less-experienced members, then perhaps including a little bit of expansion amid the list would be helpful. If you see it as something that experienced members can point one another to when trying to refine their discourse, it might be useful to promote a little bit of the text about not weaponizing the list / not using it as a suicide pact into the main text.
3) I also think the “43 minute read” text runs the risk of turning people away before they’ve even read the part about how they don’t have to read all of it; once you have a stable draft you could consider creating a canonical link with just the short version and a link to the full expansions. (even people who are willing to put in the effort to read a longer piece might suffer because they think they need to save it for later, instead of reading it immediately at a time when it would be helpful to a conversation).
4) Finally, I think some of the comments might reveal some confusion among readers about what parts of this are intended as universal norms for good communication vs. universal norms for clear thinking vs. a style guide for this particular website (your expansion regarding how guideline 1 applies to idiomatic hyperbole suggests that it's at least a little bit of the latter). If this is to be an enduring, linkable resource then it might be helped by more context on that point as well.
If you’re going to have this kind of disclaimer being this emphatic, then I’d really recommend putting everything below into a separate post. I haven’t read this post yet largely because it says “43 min read”, and from just checking that I couldn’t know that it’s secretly a short post with an optional companion volume. And given the content, I suspect you especially care to maximize the number of readers of the first part.
Or, alternatively, put this disclaimer at the very beginning of the post. The introduction kind of says it already, but I think that I at least just skimmed the first sentences of the intro and then moved forward, figuring the intro would be normal intro fluff. But then I also pretty quickly bounced off the post due to its length, before reaching the quoted paragraph.
I think that if the very first paragraph were something like
then the message would be much harder to miss.
(Continued feelings of frustration and anger; I think I understand all of you and I don’t think any of you yet understand me and haven’t seen anyone visibly try. This is in no small part because I haven’t given anything in the way of detail, but the proposed edits/additions feel like … capitulating to the Twitter mob? As a Focusing handle?)
I think I’d like to note, as something like “a request you have zero obligation to meet; it’d be more doing me an active favor than fulfilling some moral duty,” that I have a wish/hope that you spend thirty seconds imagining “what if these suggestions were terrible? Like, what if Omega came down and told me ‘Duncan was right, your version is objectively and meaningfully worse, those changes caused problems’ … what model would I produce, as a result, trying to explain what was going on?”
Sorry, I didn’t realize that you’d dislike that suggestion as well. I assumed that it was primarily the suggestion of shortening the post that you were unhappy with, since the introduction section already kind of says the same thing as the proposed paragraph and I was only suggesting saying it with slightly more emphasis.
I’m trying to think about it, but finding it hard to answer, since to me moving that paragraph to an earlier point seems like a very minor change. One thought that comes to mind is “it would change people’s first impression of the post” (after all, changing people’s first impression of the length of the post is what the change was intended to achieve)… presumably in a worse way somehow? Maybe make them take the post less seriously in some sense? But I suspect that’s not what you have in mind.
It would be helpful to get a hint of the kind of axis on which the post would become worse. Like, is it something that directly affects some property of the post itself, such as its persuasiveness or impact? Or is this about some more indirect effect, like giving in to some undesirable set of norms (that’s what your mention of the Twitter mob implies to me)?
It’s more the latter; I think that it further reinforces a sense of something like “people should have to put forth zero effort; whatever it takes to get reader buy-in, no matter how silly; if your post isn’t bending over backwards to smooth the transition from [haven’t read] to [read] it’s automatically unstrategic (as opposed to maybe those readers just aren’t part of the audience),” etc.
Literally the first paragraph of the post is like, “this is mostly about a short list.” The sort of reader who sees “43 min” on a LessWrong post and then is so deterred that they don’t even read the first paragraph feels already lost to me, and going further in the direction of accommodating them (I already weakened the post substantially on behalf of the tl;dr crowd; this is already WAY capitulating) seems bad not only for the specific post but also for, like, sending the implicit social signal that yes, your terrorism is working, please continue leaning on the incentive gradient that makes it hard to take [an audience who actually gives a crap and doesn’t need to be infinitely “sold” on every little thing] for granted.
Putting a soothing “don’t worry, this is actually short, you don’t have to read something big and scary if you don’t want to!” message as literally the first line of the post sends a strong message that I Do Not Want To Send; people should just not read it if they don’t want to and my reassurances and endorsements shouldn’t be necessary.
This is why the zeroth guideline is “expect to have to put in a little work some of the time;” in the future I’ll answer such questions by linking to it but it’s a bit circular in this case when people have already demonstrated that they’re loath to even read that far.
If I were to rephrase this in my own words, it’d be something like:
“There’s a kind of expectation/behavior on some people’s behalf, where they get unhappy with any content that requires them to put in effort in order to get value out of it. These people tend to push their demand to others, so that others need to contort to meet the demand and rewrite everything to require no effort on the reader’s behalf. This is harmful because optimizing one variable requires sacrifices with regard to other variables, so content that gives in to the demand is necessarily worse than content that’s not optimized for zero effort. (Also there’s quite a bit of content that just can’t be communicated at all if you insist that the reader needs to spend zero effort on it. Some ideas intrinsically require an investment of effort to understand in the first place.)
The more that posts are written in a way that gives in to these demands, the more it signals that these demands are justified and make sense. That then further strengthens those demands and makes it ever harder to resist them in other contexts.”
Ideally I’d pause here to check for your agreement with this summary, but if I were to pause here it’d be quite possible that I’d wander off and never get around to answering your earlier prompt. So I’ll just answer on the assumption that this is close enough.
So, if Omega came to me and told me that making the change would actually make things worse, what would my model be?
Well, I’d definitely be surprised. My own model doesn’t quite agree with the above paraphrase—for one, I was one of the people who didn’t read the introduction properly, and I don’t think that I’m demanding everyone rewrite their content so as to require zero effort to read.
That said, a steelman of the paraphrase probably shouldn’t assume that all such people require all content to require literally zero effort. There can still be an underlying tendency for people to wish that they were presented with content that required less effort in general. So even if I might correctly object “hey, I don’t actually expect all content to require literally zero effort from me”, it might still be the case that I’m more impatient than I would be in a counterfactual world where I wasn’t influenced by the social forces pushing for more impatience.
Now that I think of it, I’m pretty certain that that’s actually indeed the case.
Another objection I had to the paraphrased model was that the forces pushing in the direction of impatience are just too strong to make an impact on. But while that might be the case globally, it doesn’t need to be the case locally. Even if a social incentive wasn’t strong enough to take root in the world as a whole, it could take root in rationalist spaces. And in fact there are plenty of social incentives that hold in rationalist spaces while not holding in the world in general.
It’s also relevant that these kinds of norms select for people who are more likely to agree with them. So if we consistently enforce them, that has the effect of keeping out the kinds of people who wouldn’t agree with them, making local enforcement of them possible.
So maybe one model that I could have, given Omega saying that my proposed change would have a bad impact on the post, would be something like… “Making the change would reduce the amount of effort that people needed to expend to decide whether reading this post was worth it. That would increase social pressure on other people on LW to write posts that were readable with minimum effort. While the marginal impact of this post in particular wouldn’t be that big, it’d still make it somewhat more likely that another post would give in slightly more, making it somewhat more likely that yet another post would give in slightly more, and so on. As a result of that, more impatient users would join the site (as the posts were now failing to filter them out) while more patient users would be pushed out, and this would be bad for the site in general.”
I’ll note that Logan’s writing historically gets “surprisingly” little engagement, and “it doesn’t fit well in a culture of impatience” is among my top guesses as to why.
Like, if LessWrong were 15% more patient (whatever that actually means), I suspect Logan’s writing in particular would get something like 30% more in the way of discussion and upvotes.
So my disagreement with this model is that it sounds like you’re modeling patience as a quantity that people have either more or less of, while I think of patience as a budget that you need to split between different things.
Like at one extreme, maybe I dedicate all of my patience budget to reading LW articles, and I might spend up to an hour reading an article even if its value seems unclear, with the expectation that I might get something valuable out of it if I persist enough. But then this means that I have no time/energy/patience left to read anything that’s not an LW article.
It seems to me that a significant difficulty with budgeting patience is that it's not a situation where I know the worthwhile things in advance and just have to divide my patience between them. Rather, finding out what's worthwhile requires an investment of patience by itself. As an alternative to spending 60% of my effort budget on one thing, I could, say, take half of that and spend 5% on each of six things, sampling them to see which one seems the most valuable to read, and then invest 30% in diving into the one that does. And that might very well give me a better return.
On my model, it mostly (caveat in next paragraph) doesn’t make sense to criticize people for not having enough patience—since it’s not that they’d have less patience overall, it’s just that they have budgeted more of their patience on other things. And under this model, trying to write articles so as to make their value maximally easy to sample is the prosocial thing, since it helps others make good decisions.
I get that there's some social pressure to make things easy-to-digest for its own sake, and that some people feel principled indignation when forced to expend effort, which goes beyond the "budget consideration" model. But compared to the budget consideration, this seems like a relatively minor force, in my experience. Sure, there are some people like that, but I don't experience them as influential enough to be worth modeling. I think for most impatient people, the root cause of their impatience isn't principle but simply having too many possible things they could split their patience between.
… it’s pretty obviously both?
Like, each person is going to have a quantity, and some people will have more or less, and each person will need to budget the quantity that they have available.
And separately, one can build up one’s capacity to bring deliberate patience to bear on worthwhile endeavors, thus increasing one’s available quantity of patience, or one can not.
What I’m trying to say about Logan’s writing in particular is something like “it takes a certain degree of patience (or perhaps more aptly, a certain leisurely pace) to notice its value at all (at which point one will be motivated to keep mining it for more); that degree of patience is set higher than 85+% of LessWrongers know to even try offering to a given piece, as an experiment, if they haven’t already decided the author is worth their attention.”
Ah, that makes sense. I like that framing as an elegant way of combining the two.
Do you have a model of how to change that? Like, just have the site select for readers that can afford that leisurely pace, or something else?
Not really, alas.
Like, there are ideas along the lines of “reward people for practicing the skill of patience generally” and “disincentivize or at least do not reward the people practicing the skill of impatience/making impatient demands.”
But a) that’s not really a model and those aren’t really plans, and b) creating the capacity for patient engagement still doesn’t solve the problem of knowing when to be patient and when to move on, for a given piece of writing.
(Not sure I’ll be able to substantively respond but wanting to note that I’ve strong upvoted on both axes available to me; your summary up to the point where you noted you’d check in was great)
Yeah, for real, Kaj—I'm pretty sure that was, in form if not content, among the best contributions to a comment thread I've ever seen.
FWIW I like this comment much more than some of the others you’ve written on this page, because it feels like it’s gotten past the communication difficulty and foregrounds your objection.
I am a little suspicious of the word ‘should’ in the parent comment. I think we have differing models of reader buy-in / how authors should manage it, where you’re expecting it to be more correlated with “how much you want them to read the post” than I am.
This line was also quite salient to me:
There's an ongoing tradeoff-fight over which things should count as appropriate context ("taken for granted") in which posts. The ideal careful reader does have to be finitely sold on the appropriate details, and writing with them in mind is part of what makes writing posts sharpen one's thinking. We have the distribution of potential readers that we actually have.
I want to simultaneously uphold and affirm (you writing for the audience and assumed context you want) and (that not obviously being the ‘rationalist’ position or ‘LessWrong position’). When ‘should’ comes out in a discussion like this, it typically seems to me like it’s failing to note the distinction or attempting to set the norm or obvious position (such that opposition naturally arises). [Most of the time you instead write about Duncan culture, where it seems appropriate.]
(I like and have upvoted the above)
For sure! This is part of why the post took me over a year; I did in fact work hard to strike what I felt was a workable compromise between me-and-the-culture-I’d-like-us-to-have and the-culture-we-currently-have.
Some of what’s going on with my apologetic frustration above is, like, “Gosh, I’ve already worked real hard to bridge the gap/come substantially toward a compromise position, but of course you all don’t know that/can’t be expected to have seen any of that work, and thus to me it feels like there’s a real meta question about ‘did you go far enough/do an effective-enough job’ and it’s hard to make visceral to myself that y’all’s stance on that question is (probably) different than it would be if you had total extrospective access to my brain.”
I do not at all mean to criticize you deeply; this is a great post. I just want to be able to use it in conversations on Discord, where people are new to the concept, with somewhat less difficulty. I linked it somewhere and got the immediate response "it opens with a quote from that one lady, close", and another that was approximately "geez that's long, can you summarize". Yes, I know you wish the sanity waterline were higher than that; and you did do a great job building this ladder to dip into the sanity so the sanity can climb the ladder. I just wanted a link that would clearly signal "you don't have to read the rest if you decide the intro isn't worth it". It's a small edit, and your existing work on the thing doesn't make it impossible to change further. Honestly, when I first posted my comment I thought I was being constructive and friendly.
Having read to this point in the thread, part of me wants this post to be called “Basics Of Intermediate Rationalist Discourse”.
Just copy paste the bullet points.
Why in the world would we want to optimize for engagement with people like that…? Excluding those who react in such a way seems to me to be a good thing.
This is my sense as well, but this is in large part the core of the cultural disagreement, I think?
Like, back in the early 2000′s, there was a parkour community centered around a web forum, NCparkour.com. And there was a constant back-and-forth tension between:
a) have a big tent; bring in as many people as possible; gradually infect them with knowledge and discipline and the proper way of doing things
b) have standards; have boundaries; make clear what we’re here to do and do not be particularly welcoming or tolerant of people whose default way of being undermines that mission
My sense is that, if you’re an a-er, the above mentality seems like a CLEAR mistake, à la “why would you drive away someone who’s a mere two or three insights away from being a good and productive member of our culture??”
And if you’re a b-er, the above mentality is like, yep, two or three insights away from good is a vast and oft-insurmountable distance, people generally don’t change and even if they do we’re not going to be able to change them from the outside. Let’s not dilute our subculture by allowing in a bunch of "voters" who don’t even understand what we’re trying to do, here (and will therefore ruin it).
My sense is that LessWrong has historically been closer to a than to b, though not so close to a that, as a b-er, I feel like it’s shooting itself in the foot. More like, just failing to be the shining city on the hill that it could be.
(Also, more of a side note, but: the quoted text is not from J.K. Rowling.)
Have another terrible suggestion!
There’s a spectrum going “section title, phrase with tooltip, phrase that expands on click, hyperlink, jargon capitalized to suggest you should google it, text” where each step gives more justification to take the words at face value. Making a ~15-line summary of the post into a table of contents at its very start would harness this.
(noting, with substantial apology and a general lack of pride/endorsement, that I won’t be able to reply to this with anything coherent because all I feel is anger/frustration)
It would be less frustrating if it weren’t likely that these criticisms are just replacing a bunch of counterfactual criticisms of the form “but what about X?” (where X is addressed in the follow-up post, but no one clicks through to read a whole separate post just to find out the nuances behind the original list of ten). You can’t win!
Just a small piece of feedback. This paragraph is very unclear, and it brushes on a political topic that tends to get heated and personal.
I think you intended to say that the norms you’re proposing are just the basic cost of entry to a space with higher levels of cooperation and value generation. But I can as easily read it as your norms being an arbitrary requirement that destroys value by forcing everyone to visibly incur pointless costs in the name of protecting against a bogeyman that is being way overblown.
This unintended double meaning seems apt to me: I mostly agree with the guidelines, but also feel that rationalists overemphasize this kind of thing and discount the costs being imposed. In particular, the guidelines are very bad for productive babbling / brainstorming, for intuitive knowledge transfer, and other less rigorous ways of communicating that I find really valuable in some situations.
The best frame for this is that better world models and better thinking are not free. They require paying actual costs, if only in energy. Usually this cost is very cheap, but it can get expensive for certain problems. Thus, costs are imposed for better reasoning by default.
Also, I think that babbling/brainstorming is pretty useless due to the high number of dimensions of a lot of problems. Babbling and brainstorming scale as 2^n, with n being the number of dimensions, and for high values of n, babbling and brainstorming are way too expensive. It's similar to 2 of John Wentworth's posts that I'll link below, but in most real problems, babbling and brainstorming will make progress way too slowly to be of any relevance.
This is also why random exploration is so bad compared to focused exploration.
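To make the scaling claim above concrete, here is a minimal sketch (my illustration, not from the original comment): if we model a problem as n independent binary "dimensions", undirected babbling over all combinations of choices grows as 2^n candidates, while a focused search that settles one dimension at a time considers only a linear number of options.

```python
# Hypothetical illustration of the 2^n scaling claim.
# "Dimensions" here are binary design choices; the names are made up.

def undirected_candidates(n: int) -> int:
    """Undirected babbling: every combination of n binary choices
    is a distinct candidate, so the search space is 2^n."""
    return 2 ** n

def focused_candidates(n: int) -> int:
    """Focused exploration: settle each dimension in turn,
    considering its two options, for 2*n total evaluations."""
    return 2 * n

for n in (5, 10, 20):
    print(n, undirected_candidates(n), focused_candidates(n))
```

At n = 20 the undirected search already faces over a million combinations versus forty focused evaluations, which is the gap the comment is gesturing at.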
Links below for why I believe in the idea that babbling/brainstorming is usually not worth it:
Strong disagree; like, strong enough that I will be blunter than usual and say “this is just false.” If you project a bunch of stuff onto the guidelines that isn’t actually there in the text, then yeah, but.
All of Julia Galef, Anna Salamon, Rob Bensinger, Scott Garrabrant, Vaniver, Eliezer Yudkowsky, Logan Brienne Strohl, Oliver Habryka, Kelsey Piper, Nate Soares, Eric Rogstad, Spencer Greenberg, and Dan Keys have engaged in productive babbling/brainstorming, intuitive knowledge transfer, and other less rigorous ways of communicating on the regular; the only difference is that they take three seconds to make clear that they’re shifting into that mode.
You didn’t address the part of my comment that I’m actually more confident about. I regret adding that last sentence, consider it retracted for now (I currently don’t think I’m wrong, but I’ll have to think/observe some more, and perhaps find better words/framing to pinpoint what bothers me about rationalist discourse).
I’m not sure what the suggestion, question, or request (in the part you’re more confident about) was. Could you nudge me a little more re: what kind of response you were hoping for?
It seems to me that you are attempting to write a timeless, prescriptive reference piece. Then a paragraph sneaks in that is heavily time and culture dependent.
I’m honestly not certain about the intended meaning. I think you intend mask-wearing to be an example of a small and reasonable cost. As a non-American, I’m vaguely aware of what Costco is, but don’t know if there’s some connotation or reference to current events that I’m missing. And if I’m confused now, imagine someone reading this in 2030...
Without getting into the object-level discussion, I think such references have no place in the kind of post this is supposed to be, and should be cut or made more neutral.
[Thought experiment meant to illustrate potential dangers of discourse policing]
Imagine 2 online forums devoted to discussing creationism.
Forum #1 is about 95% creationists, 5% evolutionists. It has a lengthy document, “Basics of Scientific Discourse”, which runs to about 30 printed pages. The guidelines in the document are fairly reasonable. People who post to Forum #1 are expected to have read and internalized this document. It’s common for users to receive warnings or bans for violating guidelines in the “Basics of Scientific Discourse” document. These warnings and bans fall disproportionately on evolutionists, for a couple reasons: (a) evolutionist users are less likely to read and internalize the guidelines (evolutionist accounts tend to be newly registered, and not very invested in forum discussion norms) and (b) forum moderators are all creationists, and they’re far more motivated to find guideline violations in the posts of evolutionist users than creationist users (with ~30 pages of guidelines, there’s often something to be found). The mods are usually not very interested in discussing a warning or a ban.
Forum #2 is about 80% creationists, 20% evolutionists. The mods at Forum #2 are more freewheeling and fun. Rather than moderating harshly, the mods at Forum #2 focus on setting a positive example of friendly, productive discourse. The ideological split among the mods at Forum #2 is the same as that of the forum of the whole: 80% creationists, 20% evolutionists. It’s common for creationist mods to check with evolutionist mods before modding an evolutionist post, and vice versa. When a user at Forum #2 is misbehaving, the mods at Forum #2 favor a Hacker News-like approach of sending the misbehaving user a private message and having a discussion about their posts.
Which forum do you think would be quicker to reach a 50% creationists / 50% evolutionists split?
I think this thought experiment isn’t relevant, because I think there are sufficient strong disanalogies between [your imagined document] and [this actual document], and [the imagined forum trying to gain members] and [the existing LessWrong].
i.e. I think the conclusion of the thought experiment is indeed as you are implying, and also that this fact doesn’t mean much here.
Well, the story from my comment basically explains why I gave up on LW in the past. So I thought it was worth putting the possibility on your radar.
I thought about this a bit more, and I think that given the choice between explicit discourse rules and implicit ones, explicit is better. So insofar as your post is making existing discourse rules more explicit, that seems good.
I want to say that I really like the Sazen → expansion format, and I like the explanation → ways you might feel → ways a request might look format even more.
1 to 4 and 6 to 9 I just straightforwardly agree with.
My issue with 5 should properly be its own blog post, but the too-condensed version is something like: those cases where the other person is not also trying to converge on truth are common enough and important enough that I don’t blame someone for not starting from that assumption. Put another way, all of the other rules seem like they work even if the other person isn’t doing them, or at least fail gracefully. Following an unarticulated version of rule 5 has in fact failed me badly before. I don’t know exactly what would falsify this claim, and I acknowledge that it might just be one or two highly salient-to-me examples, but if at the end of the conversation someone is going to smile at you and hand you a pen and ask you to sign something, then I don’t think assuming your interlocutor is also aiming for convergence on truth is a good idea.
It is possible that kind of conversation is not covered under what you’re thinking of as rationalist discourse.
With 10, I think I’m quibbling over phrasing. Holding to the absolute highest standard feels like it translates to never doing it, which is correct on the margin.
The most interesting disagreement I have is with 0. Years after adopting them, many of the changes I’ve made to how I communicate have become close to free. It is harder now to do the wrong thing, to make an ad hominem or to say I’m a hundred percent sure, in much the same way that I have to make an effort to stop myself from looking at my card in Hanabi (because when I draw a card and add it to my hand, the vast majority of card games have me look at that card!) or the way I have to make an effort to fall off a bicycle when my body knows how to balance and keep moving. I think the 0th guideline is a useful nudge in the right direction, but suspect that if I stick to these guidelines for the next five years, I will feel like it takes less energy to follow them than to break them.
I wholeheartedly agree that following 5 leaves you vulnerable to defection; the claim is that (especially within a subculture like LessWrong) the results of everybody hunting stag on this one are much better than the results of everyone choosing rabbit; you will once in a while get taken advantage of for an extra minute or two/a few more rounds of the back-and-forth, but the base rate of charity in the water supply goes WAY up and this has a bunch of positive downstream effects and is worth it on net in expectation (claim).
(This is elaborated on a good bit in the expansion of 5 if you haven’t read it and are curious. I’d love to be tagged in an objection post, b/c I’d probably engage substantially in the comments.)
10 should maybe be toned back a bit!
I strongly agree with your take on 0; this is hit pretty hard in the Sapir-Whorf piece from a couple of days ago (the thing feels effortful when it’s not reflecting your inner thought processes but using the language can update the inner thought processes, and speaking in a way that reflects the inner thought processes is subjectively ~0% extra effort). But if we’re wanting to gain new skill and not just stay as-good-at-discourse as we currently are, we’re each going to need to be nonzero trying on some axis.
I had read the expansions. We might be in practical agreement on 5. I would say if you’re debating in the comments of a Less Wrong thread, following 5 is positive expected value. You’ll avoid escalations that you’d otherwise fall into, and being defected against won’t cost you too much. It stands out to me because other guidelines (say, 2 and 8) I would be comfortable holding myself to even if I knew my interlocutor wasn’t going to follow them. It’s when you’re having higher stakes discussions that leaving yourself open in good faith can go badly. (Edited when it was pointed out to me that they’re always referred to as guidelines, never rules. This is in fact a useful distinction I let blur.)
I agree that when starting out 0 is likely to require energy, like, more energy than it feels like it should to do something like this. “Expect good discourse to require energy until you become very used to it, then it should feel natural” is a weaker message but is how I am interpreting it. (I am trying and failing to find the sequences post about rules phrased as absolutes that aren’t actually absolutes, such that it stays your hand until need actually weighs you down.)
I will be sure to direct your attention to the objection post once I write it. It is partially written already and did not start life as an objection, but it does apply and will be finished. . . someday.
Since I haven’t said so yet, thank you for writing this and giving me a link to reference!
Er, this is maybe too nitpicky, but it’s pretty important to me that these are guidelines, not rules (with expansion on what a guideline means); I worked hard to make sure that the word “rules” appeared nowhere in the text outside of one quote in the appendix.
You are correct and that is an important distinction I blurred in my own head, thank you.
There is no value in framing good arguments as prescriptive, and delayed damage in cultivating prescriptive framings of arguments. A norm is either unnecessary when there happens to be agreement, or exerts pressure to act against one’s better judgement. The worst possible reason for that agreement to already be there is a norm that encourages it.
When someone already generally believes something, changing their mind requires some sort of argument; it probably won’t happen for no reason at all. That is the only burden of proof there ever is.
Thus a believer in a guideline sometimes won’t be convinced by a guideline violation that occurs without explanation. But a person violating a guideline isn’t necessarily working on that argument, leaving reasons for guideline violation a mystery.
I think you’re missing the value of having norms at the entry points to new subcultures.
LessWrong is not quite as clearly bounded as a martial arts academy; people do not agree to enter it knowing that there will be things they have to do (like wearing a uniform, bowing, etc).
And yet it is a nonstandard subculture; its members genuinely want it to be different from being on the rest of the internet.
Norms smooth that transition—they help someone who’s using better-judgment-calibrated-to-the-broader-internet to learn that better-judgment-calibrated-to-here looks different.
When you come to know something because there is a norm to, instead of by happening to be in the appropriate frame of mind to get convinced by arguments, you either broke your existing cognition, or learned to obscure it, perhaps even from yourself when the norm is powerful enough.
I want people here to be allowed honesty and integrity, not get glared into cooperative whatever. This has costs of its own.
There needs to be some sort of selection effect that keeps things good, my point is that cultivation of norms is a step in the wrong direction. Especially norms about things more meaningful than a uniform, things that interact with how people think, with reasons for thinking one way or another.
It’s hard to avoid goodharting and deceptive alignment. Explicitly optimizing for obviously flawed proxies is inherently dangerous. Norms take on a life of their own, telling them to stop when appropriate doesn’t work very well. They only truly spare those strong enough to see their true nature, but even that is not a prerequisite for temporarily wielding them to good effect.
This is stated as an absolute, when it is not an absolute. You might want to take a glance at the precursor essay Sapir-Whorf for Rationalists, and separately consider that not everyone’s mind works the way you’re confidently implying All Minds Work.
You’re strawmanning norms quite explicitly, here, as if “glared into cooperative whatever” is at all a reasonable description of what healthy norms look like. You seem to have an unstated premise of, like, “that whole section where Duncan talked about what a guideline looks like was a lie,” or something.
I hear that that’s your position, but so far I think you have failed to argue for that position except by strawmanning what I’m saying and rose-ifying what you’re saying.
Agree; I literally created the CFAR class on Goodharting. Explicitly optimizing for obviously flawed proxies is specifically recommended against in this post.
This is the closest thing to a-point-I’d-like-to-roll-around-and-discuss-with-you-and-others in your comments above, but I’m going to be loath to enter such a discussion until I feel like my points are not going to be rounded off to the dumbest possible neighbor of what I’m actually trying to say.
I think the X’s and Y’s got mixed up here.
Otherwise, this is one of my favorite posts. Some of the guidelines are things I had already figured out and try to follow but most of them were things I could only vaguely grasp at. I’ve been thinking about a post regarding robust communication and internet protocols. But this covers most of what I wanted to say, better than I could say it. So thanks!
Oh, thanks; fixed
...is a very difficult task even by the standards of “good discourse requires energy”. To present anything but a strawman in such a case may require more time than the general discussion—not necessarily because your model actually is a strawman but because you’d need to “dot many i’s and cross many t’s”—I think that’s the wording.
(ETA: It seems to me like it is directly related to obeying your tenth guideline.)
I think that it’s fine (and well within the spirit of these guidelines) to reply with something like:
“No, but I can give you my strawman and you can tell me where I’m missing you?”
That’s an ingenious solution! I still feel like there’s some catch here but can’t formulate it. Maybe because it’s way past midnight here and I should just go to sleep.
This is great. I notice that other people have given caveats and pushback that seems right to me but that I didn’t generate myself, and that makes me nervous about saying I endorse it. But I get a very endorse-y feeling when I read it, at any rate.
(I have a vague feeling there was something that I did generate while reading? But I no longer remember it if so.)
Another feeling I get when I read it is, I remember arguments I’ve had in rat spaces in the past, and I want to use this essay to hit people round the head with.
This one feels awkward to me because I don’t really know where to draw the line.
Or like, I think I know where feels natural to draw the line to me, and I kind of expect most people to agree that’s a sensible place most of the time. (I expect that without really checking—maybe it’s the case that if I expect wrong, I would have noticed something by now? But I’m not sure.) But if I imagine someone disagreeing with me, and trying to explain to them or an audience why I draw the line there, I have trouble coming up with more than “okay, but like, come on”.
(Oh, and come to think of it, I think I have in fact had arguments that seemed to hang on where to draw the line?)
Consider this quote from HPMOR:
But what’s more precisely the case is that Dumbledore has a memory of reaching the trophy room and seeing something that looked exactly like an unconscious Draco. The most obvious explanation for this memory is that he reached the trophy room and then photons entered his retinas and blah blah blah, and the most obvious explanation for that involves Draco being unconscious in the trophy room.
(Maybe in the story he did more than just see that, but I think that’s not particularly relevant here.)
But why is it okay to say “I saw Draco unconscious”, and we don’t have to say “I saw what looked exactly like an unconscious Draco” or even “I have a memory of seeing what looked exactly like an unconscious Draco”? It feels right to me, but… well, sometimes people present their inferences as observations and I guess that feels right to them too?
The vague solution I’m currently thinking about is, some inferences are not in question. In the story, it was in question whether Hermione had ever been in the trophy room, and so “unsafe” to infer “Hermione had already left” from “did not see Hermione”. It was not in question whether Draco had been unconscious in the trophy room, and so “safe” to infer “saw Draco” from “saw what looked exactly like Draco”.
This doesn’t feel entirely satisfactory. For one thing it has lowest-common-denominator dynamics, I don’t want someone to be able to start pretending to question everything and be able to bog down all discussion, but then maybe at some point I should just go “not worth it to me to continue this” and that’s fine?
For another it doesn’t really help you catch unexamined assumptions that turn out false. Harry might have let “Hermione had already left” slide because he assumed she was there, and that seems okay according to this—a wrong result, but not because he acted badly by this guideline as modified by that vague solution. Maybe that’s fine if we accept that “catch unexamined assumptions” isn’t really the point here, but idk, I think it’s at least partly the point? And also I still want to say “okay but Dumbledore didn’t observe Hermione there or Hermione leaving”, even if it’s the case that Hermione was there and had left and this is not in question.
So I don’t consider this resolved, but that feels maybe directionally correct? And in particular, if someone had said “but we should double check Draco was actually there—Headmaster, did you check that the thing you saw was in fact Draco? Do we have some way to rule out that you’ve been false memory charmed?” and the answers to those questions were no, then I think I’d be okay with this guideline suggesting “Dumbledore should switch to ‘I remember seeing something that looked like...’”.
I think one of the goals of the overall piece was to convey the meta-norm of, like … being open to requests to slide in a direction?
So in my world, Dumbledore was making no mistakes when he said “I saw Draco unconscious,” because he was in a standard frame/conforming to ordinary word usage. Harry then made a bid for drawing the boundary between “what we’re going to count as inference” and “what we’re going to count as observation” in a lower, more fundamental place, and Dumbledore consented, and the conversation shifted into that new register.
I don’t think someone’s doing something wrong if they say “I saw you make a super angry face!” as long as, if their conversational partner wants to disagree that the face was angry, they’re willing to back up and say, okay, here’s more detail on what I observed and why I concluded it meant “angry.”
(Or, in other words, I agree with what you’re saying toward the end of your comment.)
Genuine thanks for making this and actually posting the names. My monkey brain has decided these lists are a thing that happens now and I really need to make being on one a new life goal.
The second arrow seems like it’s going in the wrong direction, in that the third statement seems to be making more inferences than the second one. Mostly just because “They’re doing it on purpose.” seems too strong (and also not in the spirit of 10). E.g. they might not have bothered reading the accurate info and still honestly believe their points.
I agree it’s going backwards in that it’s making more inferences than the second, but it’s also exposing those inferences/has the virtue of being more openly cruxy and interface-able. Plausible I should tweak the example anyway, though; thank you for highlighting this one.
When you say “straightforwardly false”, do you intend to refer to any particular theory of truth? While I have long known of different philosophical concepts and theories of “truth”, I’ve only recently been introduced to the idea that some significant fraction of people don’t understand the words “true” and “false” to refer at-least-primarily to correspondent truth (that is, the type of truth measured by accurate reflection of the state of the world). I am not sure if that idea is itself accurate, nor whether you believe that thing about some/many/most others, or what your individual understanding of truth is, so I find it hard to interpret your use of the word “false”.
What I mean by false is not something I have pinned down in a deeply rigorous philosophical sense. But here are some calibrating examples:
Everybody loves Tom Hanks!
The sky is often green.
God does not play dice with the universe.
This is the best book ever written. (← It is an unfortunate side effect of this sort of common hyperbole that in the rare case when one actually means to make this claim literally, one has to say many more words to make that clear.)
I’m certain we will be there by 5PM.
There’s absolutely no other explanation for X.
You’re not listening to me. (← Here there is both the trivial and somewhat silly layer in which you’re clearly expecting the person to parse the sentence, but also the deeper layer in which you are asserting as if fact something about the other person’s internal experience that you do not and cannot know (as opposed to having high credence in a model).)
Absolutes are a pretty good way to achieve the “straightforwardly false” property in a hurry, and I suspect they make up at least a plurality of instances in practice, if not a straight majority.
In short, though: I don’t expect that I’m capable of catching all of the instances of straightforward falsehoods around me, or that I could describe a detection algorithm that would do so. But I’ve got detection algorithms that catch plenty anyway; the airwaves are full of ’em.
If you think this post would be stronger with more real-world examples of each guideline (either failures to follow it, or stellar examples of abiding by it), then please keep your radar open for the next few days or weeks, and send me memorable examples. I am not yet sure what I’ll do with those, but having them crowdsourced and available is better than not having them at all, or trying to collect them all myself.
Also: I anticipate substantial improvements to the expansions over time, as well as modest improvements to the wording of each of the short/relatively pithy expressions of the guidelines. I’m super interested in those.
I’m less expecting to be convinced by a bid to completely nix one of these, add a brand-new one, or swap one, but that wouldn’t shock me, so feel free to make those, too.
> Aim for convergence on truth, and behave as if your interlocutors are also aiming for convergence on truth.
It’s not clear to me what the word “convergence” is doing here. I assume the word means something, because it would be weird if you had used extra words only to produce advice identical to “Aim for truth, and behave as if your interlocutors are also aiming for truth”. The post talks about how truthseeking leads to convergence among truthseekers, but if that were all there was to it then one could simply seek truth and get convergence for free. Apparently we ought to seek specifically convergence on truth, but what does seeking convergence look like?
I’ve spent a while thinking on it and I can’t come up with any behaviours that would constitute aiming for truth but not aiming for convergence on truth, could you give an example?
I think this wording does need to be changed/updated, since it’s not clear. I’m trying to post-hoc introspect on why “convergence” felt good (i.e. these were not my explicit thoughts at the time) and what’s coming up is:
A different set of actions will come out of me if I’m trying to get both of us to successfully move toward truth, from each of our respective current positions, than if I am solely trying to move toward truth myself, or solely trying to force you to update.
So “aim for convergence on truth” carries with it a connotation of “taking a little bit of responsibility for the pairwise dynamic, rather than treating the conversation as a purely egocentric procedure motivated by my own personal desire to be personally less wrong.”
I propose another discussion norm: committing to being willing to have a crisis of faith in certain discussions and if not, de-stigmatizing admitting when you are, in fact, unwilling to entertain certain ideas or concepts, and participants respecting those.
Seems good, but seems like probably not a basic norm? Feels more advanced than “foundational.”
As a matter of pure category, yeah, it’s more advanced than “don’t make stuff up”.
I usually see these kinds of guides as an implicit “The community is having problems with these norms.”
If you were to ask me “what’s the most painful aspect about comments on lesswrong?”, it’s reading comments that go on for 1k words apiece where neither commenter ever agrees, and it’s probably the most spooky part for me as a lurker, and made me hesitant to participate.
So I guess I misread the intent of the post and why it was boosted? I dunno, are these not proposals for new rules?
Edit: Sorry, I read a bit more in the thread and these guidelines aren’t proposals for new rules.
Since that’s the case, then I guess I just don’t understand what problem is being solved. The default conversational norms here are already high-quality, it’s just really burdensome and scary to engage here.
And in an effort to maintain my proposed norm: you’d have to make an arduously strong case (either via many extremely striking examples or lots of data with specific, less-striking examples) to convince me that this actually makes the site a better place to engage than what people seem to be doing on their own just fine.
Second Edit: I tried to follow the “Explain, don’t convince” request in the rule here. Please let me know if I didn’t do a good job.
Third edit: some wording felt like it wasn’t making my point.
If you think this is a consensus guide, I think you should add it to a wiki page. I am happy to do so.
If people think that shouldn’t be the case, I’d ask what the wiki is for other than for broad consensus opinions.
I think it’s not quite yet a consensus guide; there’s some reasonably large disagreement around 5 and 10 especially (and a little bit around 6) that I would prefer to mull over and turn into updates before adding to the wiki.
It seems like ‘social status’ is mentioned exactly once:
Which really seems like a key point that should be further reinforced by other sections, considering the topic discussed and your expressed desires, not tucked away obliquely in an isolated quote box.
I think you are intending something obvious to be implied by your comment but I’m not sure what it is
The narrower claim of:
“As far as I can tell, the overwhelming majority of people have a morality that grounds out in social status.”
seems straightforwardly understandable, at least to me.
Are you confused by the meaning or implications?
By the way, I almost never write with a Straussian intention since only a tiny subset of LW readers are sufficiently savvy and motivated to dig through multiple layers of obfuscation.
Presumably Duncan is not entirely immune to status considerations, so it’s advantageous to beat around the bush a bit, but it doesn’t seem like his intention was to hide a deeper meaning within either.
i mean given the belief that social status is very relevant to a lot of people, what would you say differently if you were writing the post?
Er, note that that example was about someone being overconfident about status being important, and that the recommendation was that they note that their conclusion lives inside their head and might not be real.
I do not think that the hypothesis that everyone’s morality is based on status is a key point or should have more time here. I think it’s usually a curiosity stopper and a Red Herring and we-as-a-community already talk about it too much given how little detail our median member has in their mental model.
I agree ‘everyone’ is overconfidence.
Though ‘overwhelming majority’ seems reasonable. If by that we mean 90%+ of the population, every community I’ve ever observed for more than an hour would likely fall into that bucket.
‘Pecking orders’ and so on.
Online communities probably too, though as interactions are primarily text based I’m less confident in that regard.
I don’t think my views are unique either, anyone who has emotional/social intelligence above a certain threshold and who’s also been in the middle or near the top of any status hierarchy would likely be able to sniff out what’s what fairly quickly.
Of course many still sometimes try to hide and disguise their behaviour as something less objectionable than naked social status competition, but when the disguises are paper-thin it doesn’t really have any potential to fool the experienced.
I have a minor nitpick, which I would not normally comment on, but I think this post deserves to be more polished and held to a higher standard than regular posts.
My nitpicking is not about the idea, or the paragraph, but rather about the specific point of those two claims being “exact opposites”.
There are examples of situations where “a very real chance” is a major and useful update, even when it makes one person think of 20% and the other of 80%. For example, if a team of scientists agree that “there is a very real chance” the sun is going to explode within the next month. It is useful from that point to get a more precise estimate than something that vague, but I would not call 20% and 80% estimates “exactly opposite claims”, as they are both major updates in the same direction from e.g. 0.00001%.
for a basics, this post is long, and I have a lot of critique I’d like to write that I’d hope to see edited. However, this post has been posted to a blogging platform, not a wiki platform; it is difficult to propose simplifying refactors for a post. I’ve downvoted for now and I think I’m not the only one downvoting, would be curious to hear reasons for downvotes from others and what would reverse them. would be cool if lesswrong was suddenly a wiki with editing features and fediverse publishing. you mention you want to edit; looking forward to those, hoping to upvote once edited a bit.
unrelated, did you know lesswrong has a “hide author until hovered” feature that for some reason isn’t on by default with explanation? :D
This post is not long for a basics post unless you ignore the part that explicitly says ~”stop reading here; the rest is for reference only at the time of specific need.”
Up until that point, it’s ~1500 words, which is not too many words for “these are the central core Things To Attend To if you want good discourse.” The core of the post is a list you can reach with two clicks (one into the essay and one in the sidebar) that contains chunks that are almost all only two lines apiece.
I think that criticism along the lines of “not brief enough” is very, very backward in this case. *shrug
it says forty three minute read on this link that you want me to use to introduce people to a concept. trivial inconveniences matter for raising water lines
This post is not long for a basics post unless you ignore the part that explicitly says ~”stop reading here; the rest is for reference only at the time of specific need.”
Up until that point, it’s ~1500 words, which is not too many words for “these are the central core Things To Attend To if you want good discourse.” The core of the post is a list you can reach with two clicks (one into the essay and one in the first two sentences) that contains chunks that are almost all only two lines apiece.
I think that criticism along the lines of “not brief enough” is very, very backward in this case. *shrug
If you would like this to be a productive exchange, let me know. Currently I don’t think you want a productive exchange; my (admittedly poor but) best guess is that you simply want me to say “oh gosh, you’re right!” which I can’t say because I don’t think it’s true.
Uh, I think their point is that the site UI is ignoring the part that explicitly says “stop reading here”, and thus your “unless you ignore the part” is irrelevant to the post’s perceived length, and that it would be reasonable for a reader to filter whether or not they read the post by something on the second line they see, and not filter posts based on a sentence that’s 1500 words in. [IMO your stronger defense is that the introductory paragraphs try to make clear that the necessary payload of the post is small and frontloaded.]
I’m dinging the gears to ascension on writing clarity, here, but… were you doing a bit, or should I also be dinging you on reading comprehension / ability to model multiple hypotheses? [Like, to be clear, I don’t think your first response was clearly bad, but it felt like something had gone wrong when you repeated the same first line in your second response.]
I do not know what to say to help someone who sees “43 min read,” does not even read the first paragraph, and makes boldly wrong assumptions about what a post is asking of them, to the point of chastising the author and saying “I’m downvoting for this [turns out to be wrong] reason.”
Like, in the future I will respond to such interactions by saying “Here’s a link to a short list of guidelines, of which the zeroth is that, if you are expecting to put in no effort at all, you will not succeed in doing discourse that’s any better than what happens out in the broader internet, and the sixth of which is to be careful with your leaps to conclusions.”
But there was a sort of circular self-protective “I’m going to confidently judge things on surface characteristics and put in no work” mentality that I didn’t know how to break through and didn’t want to engage with and indeed nonzero wrote this post specifically so I could copypasta away from in the future, and that didn’t get any better when they completely ignored my first reply to say “nuh-uh, none of that matters because LessWrong says 43 minutes at the top and that means that my choices are zero minutes or 43 minutes.”
(The above being a strawman that I do not endorse; I am trying to share something like “what my brain heard, and thus what I had to work with in dialogue with my brain.”)
I repeated the thing which had not been heard the first time in lieu of saying many much less productive things; I did not trust myself to find a way to engage charitably; I had nothing more productive to say that didn’t rely on gears being willing to do some very basic moves that I didn’t think I could persuade them to do, given their starting point.
It wasn’t a bit, it was “this whole thing is a trap, and really fucky frame warfare, and it’s putting the burden of proof in all the wrong places, and it’s the very thing that the whole post is trying to get me away from in the future, how about I just reiterate my point without escalating and leave.” Like, I had exceeded my personal competence, and was trying to do the “don’t keep adding words if they’re not going to be good” virtue.
My model of gears to ascension, based on their first 2 posts, is that they’re not complaining about the length for their own sake, but rather for the sake of people that they link this post to who then bounce off because it looks too long. A basics post shouldn’t have the property that someone with zero context is likely to bounce off it, and I think gears to ascension is saying that the nominal length (reflected in the “43 minutes”) is likely to have the effect of making people who get linked to this post bounce off it, even though the length for practical purposes is much shorter.
I think that people who are actually going to link this to someone with zero context are going to say “just look at the bulleted list” and that’s going to 100% solve the problem for 90% of the people.
I think that the set of people who bounce for the reason of “deterred by the stated length and didn’t read the first paragraph to catch the context” but who would otherwise have gotten value out of my writing is very very very very very small, and wrong to optimize for.
I separately think that the world in general and LW in particular already bend farther over backwards than is optimal to reach out to what I think of in my brain as “the tl;dr crowd.” I’m default skeptical of “but you could reach these people better if you X;” I already kinda don’t want to reach them and am not making plans which depend upon them.
Yeah, that is definitely fair