Sapir-Whorf for Rationalists

Casus Belli: As I was scanning over my (rather long) list of essays-to-write, I realized that roughly a fifth of them were of the form “here’s a useful standalone concept I’d like to reify,” à la cup-stacking skills, fabricated options, split and commit, and sazen. Some notable entries on that list (which I name here mostly in the hope of someday coming back and turning them into links) include: red vs. white, walking with three, setting the zero point[1], seeding vs. weeding, hidden hinges, reality distortion fields, and something-about-layers-though-that-one-obviously-needs-a-better-word.

While it’s still worthwhile to motivate/​justify each individual new conceptual handle (and the planned essays will do so), I found myself imagining a general objection of the form “this is just making up terms for things,” or perhaps “this is too many new terms, for too many new things.” I realized that there was a chunk of argument, repeated across all of the planned essays, that I could factor out, and that (to the best of my knowledge) there was no single essay aimed directly at the question “why new words/​phrases/​conceptual handles at all?”

So … voilà.

(Note that there is some excellent pushback + clarification + expansion to be found in the comments.)


Core claims/​tl;dr

  1. New conceptual distinctions naturally beget new terminology.

    Generally speaking, as soon as humans identify a new Thing, or realize that what they previously thought was a single Thing is actually two Things, they attempt to cache/​codify this knowledge in language.

    Subclaim: this is a good thing; humanity is not, in fact, near the practical limits of its ability to incorporate and effectively wield new conceptual handles.

  2. New terminology naturally begets new conceptual distinctions.

    Alexis makes a new distinction, and stores it in language; Blake, via encountering Alexis’s language, often becomes capable of making the same distinction. In particular, this process is often not instantaneous—it’s not (always) as simple as just listening to a definition. Actual practice, often fumbling and stilted at first, leads to increased ability-to-perceive-and-distinguish; the verbal categories lay the groundwork for the perceptual/conceptual ones.

  3. These two dynamics can productively combine within a culture.

    Cameron, Dallas, and Elliot each go their separate ways and discover new conceptual distinctions not typical of their shared culture. Cameron, Dallas, and Elliot each return, and each teach the other two (a process generally much quicker and easier than the original discovery). Now Cameron, Dallas, and Elliot are each “three concepts ahead” in the game of seeing reality ever more finely and clearly, at a per-person cost of something like only one-and-a-half concept-discoveries’ worth of work.

    (This is not a metaphor; this is in fact straightforwardly what has happened with the collection of lessons learned from famine, disaster, war, politics, and science, which have been turned into words and phrases and aphorisms that can be successfully communicated to a single human over the course of mere decades.)

  4. That which is not tracked in language will be lost.

    This is Orwell’s thesis—that in order to preserve one’s ability to make distinctions, one needs conceptual tools capable of capturing the difference between (e.g.) whispers, murmurs, mumbles, and mutters. Without such tools, it becomes more difficult for an individual, and much more difficult for a culture or subculture, to continue to attend to, care about, and take into account the distinction in question.

  5. The reification of new distinctions is one of the most productive frontiers of human rationality.

    It is not the only frontier, by a long shot. But both [the literal development of new terminology to distinguish things which were previously thought to be the same thing, or which were previously invisible] and [other processes isomorphic to that process] are extremely relevant to the ongoing improvement of our mental tech. Cf. Eliezer’s Sequences, which could reasonably be described as a tool whose main purpose is to cause some 50-500 new explicit concepts to be permanently ingrained in the reader’s lexicon.


Background I: NVC

There is a communication paradigm called “Nonviolent Communication,” often abbreviated as NVC.

NVC makes certain prescriptions about language, forbidding some words and phrases while encouraging others. If you set out to learn it, much of what you will do in the early stages is likely to feel like fiddling with your speech: applying an effortful translation filter, as if you were taking English sentences and laboriously converting them into Russian using a dictionary and a grammar handbook.

For instance, a simple-seeming sentence like “You betrayed me” is just straightforwardly not expressible in NVC. If you were talking to a chatbot that had been programmed to only understand NVC, it would be literally incapable of parsing “You betrayed me.” It would receive that string as incomprehensible gibberish, and throw up an error message.
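
(If it helps to make the chatbot image concrete, here is a toy sketch, in TypeScript, of what such a front-end filter might look like. To be clear, this is purely my illustration; the patterns below are crude stand-ins that I made up, not anything drawn from actual NVC materials or from any real implementation.)

    // Toy sketch of an "NVC-only" chatbot front end. The patterns are crude,
    // invented stand-ins for NVC's observation/feeling structure.

    const BLAME_PATTERNS: RegExp[] = [
      /\byou (betrayed|manipulated|used|lied to|abandoned) me\b/i,
      /\byou are (a|an|so)\b/i, // character judgments aimed at the other person
    ];

    // Extremely rough proxy for "observation, then feeling" phrasing.
    const NVC_SHAPE = /\bwhen i (saw|heard|noticed)\b.*\bi (felt|feel)\b/i;

    function nvcChatbot(utterance: string): string {
      if (BLAME_PATTERNS.some((pattern) => pattern.test(utterance))) {
        // The "incomprehensible gibberish" case: the sentence assigns an
        // action or judgment to the other person instead of reporting an
        // observation and a feeling.
        throw new Error(`Unparseable: "${utterance}"`);
      }
      if (!NVC_SHAPE.test(utterance)) {
        throw new Error(
          `Unparseable: expected something shaped like "When I saw X, I felt Y."`
        );
      }
      return "Parsed: observation and feeling received.";
    }

    // nvcChatbot("You betrayed me.")
    //   -> throws "Unparseable" (the error message described above)
    // nvcChatbot("When I saw you do X, after I recall you agreeing to Y, I felt angry and hurt.")
    //   -> "Parsed: observation and feeling received."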

You can convey to the chatbot your underlying state of mind—the beliefs, experiences, observations, and emotions that led you to want to express “You betrayed me.” But in order to do that, you’ll have to say a lot of other words, in what often feels (to a beginner) like a stupidly obtuse, rigid, ridiculous, ritualistic game of let’s-pretend-we-don’t-understand-what-words-mean.

“Fine. You betrayed me, but since we’re playing this idiotic game where I’m not allowed to say that, what I’ll say is that, uh, when I saw you do X, despite the fact that you agreed to—sorry, despite the fact that I recall you agreeing to do Y, then I felt betr—what? Seriously? Okay, fine, I felt angry and manipul—FINE, I felt ANGRY and HURT and USED and I DID NOT LIKE IT.”

This is often deeply dissatisfying. There is a thing that is meant by “you betrayed me,” a certain kind of social and emotional impact that the sentence is trying to have. And the NVC version fails to convey that thing. It fails to convey that thing by design—it is deliberately structured so as to make that thing inexpressible, and that (exact) impact unachievable.

Which means that many people abandon it, more or less immediately, because the actual thing they want is that impact, and therefore NVC is not the tool they need.

But an interesting thing happens, if one practices NVC for long enough (which for me was about four or five total hours, across a couple of weeks).

What happens is that one begins to feel unease around sentences like “You betrayed me.” There begins to be a tiny flinch, a note of something not-quite-right. It becomes easier to notice, and then harder to ignore—that there’s something wrong with that sentence, and sentences like it. Something wrong in truth, not just something not-in-compliance—something that more normal habits of thought and speech gloss over.

To be clear, NVC’s recommended replacement sentences aren’t flawless, either, which is why I do not often explicitly use NVC. But they are better. They are less false along an axis that is subtle (at first) and hard to put one’s finger on (at first), and which the practice of NVC, with all of its clumsy rules and forms, helps bring into focus.

And thus, even though I don’t actively use NVC, I’m glad I did actively use it, long enough for the update to sink in.

I think of NVC as being like a martial arts kata—a series of formalized dance steps that vaguely resemble the movements of combat. Practicing kata does not directly make one better at winning fights or defending oneself, and it might even be net negative if it instills false confidence.

(Analogously: NVC practitioners thinking that they can’t possibly be engaged in violence, as long as they’re in compliance with the rules of a system that has “nonviolent” right in the name!)

But practicing kata does help one to sink into and absorb a new vocabulary of movement that is utterly unlike the movements of walking or typing or driving a car. It helps to reshape one’s sense of balance and one’s intuitions, to carve new neural pathways and ingrain new reflexes, and those things can indeed be subsequently recruited and reassembled and recombined in ways that help to win fights and defend oneself.

NVC is a similar kind of … stepping stone?

… or incubator, maybe.

It isn’t The Thing™, but it can help a certain kind of person find their way to The Thing™. It (eventually) causes (some) people to not want to say sentences like “you betrayed me”...

...and furthermore the root of that hesitation is not (in my experience and in the experience of those I’ve talked to about this) because those sentences are in conflict with NVC, but rather because those sentences actually aren’t (quite) true, in a way that NVC helped them learn to recognize and develop distaste for. It’s not that people are simply obeying the rules of a system; it’s that practicing within the system has genuinely improved their ability to see.


Background II: Nate Soares on Jargon

The following is the lightly edited text of a tweetstorm from March 20, 2021.

Thread about a particular way in which jargon is great:

In my experience, conceptual clarity is often attained by a large number of minor viewpoint shifts.

(A compliment I once got from a research partner went something like “Nate, you just keep reframing the problem ever-so-slightly until the solution seems obvious.” ❤️❤️)

Sometimes, a bunch of small shifts leave people talking a bit differently, because now they’re thinking a bit differently. The old phrasings don’t feel quite right—maybe they conflate distinct concepts, or rely implicitly on some bad assumption, etc.

(Coarse examples: folks who learn to think in probabilities might become awkward around definite statements of fact; people who get into NVC sometimes shift their language about thoughts and feelings. I claim that more subtle linguistic shifts regularly come hand-in-hand with good thinking.)

I suspect this phenomenon is one cause of jargon. For example, when a rationalist says “my model of Alice wouldn’t like that” instead of “I don’t think Alice would like that,” the non-standard phraseology is closely tracking a non-standard way they’re thinking about Alice.

(Or, at least, I think this is true of me and of many of the folks I interact with daily. I suspect phraseology is contagious and that bystanders may pick up the alternate manner of speaking without picking up the alternate manner of thinking, etc.)

Of course, there are various other causes of jargon—e.g., it can arise from naturally-occurring shorthand in some specific context where that shorthand was useful, and then morph into a tribal signal, etc. etc.

As such, I’m ambivalent about jargon. On the one hand, I prefer my communities to be newcomer-friendly and inclusive. On the other hand, I often hear accusations of jargon as a kind of thought-policing.

“Stop using phrases that meticulously track uncommon distinctions you’ve made; we already have perfectly good phrases that ignore those distinctions, and your audience won’t be able to tell the difference!”

No.

My internal language has a bunch of cool features that English lacks. I like these features, and speaking in a way that reflects them is part of the process of transmitting them.

Example: according to me, “my model of Alice wants chocolate” leaves Alice more space to disagree than “I think Alice wants chocolate,” in part because the denial is “your model is wrong,” rather than the more confrontational “you are wrong.”

In fact, “you are wrong” is a type error in my internal tongue. My English-to-internal-tongue translator chokes when I try to run it on “you’re wrong,” and suggests (e.g.) “I disagree,” or perhaps “you’re wrong about whether I want chocolate.”

“But everyone knows that ‘you’re wrong’ has a silent ‘(about X)’ parenthetical!” my straw conversational partner protests.

I disagree. English makes it all too easy to represent confused thoughts like “maybe I’m bad.”

If I were designing a language, I would not render it easy to assign properties like “correct” to a whole person—as opposed to, say, that person’s map of some particular region of the territory.

The “my model of Alice”-style phrasing is part of a more general program of distinguishing people from their maps. I don’t claim to do this perfectly, but I’m trying, and I appreciate others who are trying.

And this is a cool program! If you’ve tweaked your thoughts such that it’s harder to confuse someone’s correctness about a specific fact with their overall goodness, that’s rad, and I’d love you to leak some of your techniques to me via a niche phraseology.

There are lots of analogous language improvements to be made, and every so often a community has built some into their weird phraseology, and it’s wonderful. I would love to encounter a lot more jargon, in this sense.

(I sometimes marvel at the growth in expressive power of languages over time, and I suspect that that growth is often spurred by jargon in this sense. Ex: the etymology of “category.”)

Another part of why I flinch at jargon-policing is a suspicion that if someone regularly renders thoughts that track a distinction into words that don’t, it erodes the distinction in their own head. Maintaining distinctions that your spoken language lacks is difficult!

(This is a worry that arises in me when I imagine e.g. dropping my rationalist dialect.)

In sum, my internal dialect has drifted away from American English, and that suits me just fine, though your mileage may vary. I’ll do my best to be newcomer-friendly and inclusive, but I’m unwilling to drop distinctions from my words just to avoid an odd turn of phrase.

Thank you for coming to my TED talk. Maybe one day I’ll learn to cram an idea into a single tweet, but not today.
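
A brief aside from me, rather than from Nate: the “type error” framing translates almost literally into code. Below is a minimal sketch, in TypeScript, of what it looks like for “wrong” to be a property of beliefs (pieces of someone’s map) rather than of people; the type and function names are my own invention, purely for illustration.

    // Minimal sketch: "wrong" applies to a belief, not to a person.
    // All names here are invented for illustration.

    interface Person {
      name: string;
    }

    interface Belief {
      holder: Person; // whose map this belief lives in
      about: string;  // the region of territory it concerns
      claim: string;  // what the map says about that region
    }

    // This compiles: correctness is something a belief can have or lack.
    function isWrong(belief: Belief, actualState: string): boolean {
      return belief.claim !== actualState;
    }

    const alice: Person = { name: "Alice" };

    const myModelOfAlice: Belief = {
      holder: { name: "me" },
      about: `whether ${alice.name} wants chocolate`,
      claim: "Alice wants chocolate",
    };

    // "My model of Alice is wrong" is trivially expressible:
    isWrong(myModelOfAlice, "Alice does not want chocolate"); // true

    // "Alice is wrong" does not even type-check; a whole person is not the
    // kind of thing that can be wrong in this tongue:
    //
    // isWrong(alice, "Alice does not want chocolate");
    //   -> error: type 'Person' is not assignable to parameter of type 'Belief'

The specific types don’t matter; the point is just that, in a representation like this, “you’re wrong” is hard to even write down, while “my model of you is wrong (about X)” is trivially expressible.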


The Obvious Objection: Get Out Of My Head

Occasionally, I run across one of those click-bait-y articles that goes something like “Thirty emotions humans don’t have words for,” followed by thirty made-up words like “sonder” or “etterath.”

According to the above argument, I should probably find such lists useful and interesting, but in fact they tend to be simultaneously annoying and forgettable.

This personal experience seems to rhyme with some of the culture war stuff about e.g. “we don’t need five hundred damn words for every little quirk of sexuality.”

There’s also the fact that most readers of this essay were likely forced, at some point, to brute-force memorize large amounts of vocabulary for e.g. language courses or science courses, the vast majority of which will have leaked right back out of their heads because it wasn’t actually useful to them in the first place.

On the surface, these assorted issues would seem to present a strong case in favor of something like “new terminology is actually quite costly, and often useless,” which is a view I’ve heard explicitly expressed more than a few times.

I think this critique is slightly off. It seems to emerge from a sort of zero-sum, finite-resource fear of adopting new terms, with the implicit assumption being something like “we only have so many word-slots in our brains,” or “every bit of brain-currency spent on ingraining new terminology is a bit of brain-currency not spent on other, more useful things.”

But it seems to me that there’s a very big distinction between [words and phrases that just click, because they are an obvious, intuitive match for a concept that you’ve been trying to make explicit in your own head for a while now], and [things that you’re effortfully forcing yourself to memorize and remember, despite the fact that they don’t connect to anything you directly care about].

I claim that the former category is virtually costless in practice—that when new terminology tracks distinctions you’re actually making in your mind, it sticks very, very easily. What is actually costly and effortful and useless and non-sticky is terminology that doesn’t matter to you, and pushback against that latter category often gets rounded off to “no more new words or concepts, please,” when in fact the thing people are actually fighting for is “no more new words or concepts that don’t track anything I care about, please.”

Uselessness, in other words, is relative. The feeling of “ugh, what is this again?” is what it’s like to encounter a concept you do not expect to need, and which is being forced upon you, somehow, by your work or your social context or whatever.

If you find a language request onerous, and it feels like a tedious burden, you are probably right to resist it, in the sense that the distinction you are being asked to track is not one that matters in your ontology.

But attacking the words and phrases themselves (or the process which is generating them) is a bucket error. The actual problem lies with the dynamic that is forcing you to pretend to care. The memetic ecosystem is an ecosystem, and not every meme will be fit for every niche—it’s a mistake both for the occupants of one niche to try to force all of their concepts into every other niche, and for the defensive, put-upon occupants of a niche inundated with useless memes to try to delete those memes from the environment entirely, rather than simply from their immediate surroundings.

This is complicated by the facts that:

a) the people attempting to force new concepts into the lexicon sometimes wield substantial social power, and use it to punish non-adopters, such that you can’t always just eschew terms you don’t find worth their weight.

b) sometimes there is genuine memetic warfare going on, in the sense that the pushers of new terminology genuinely intend (whether consciously or subconsciously) to reshape the thoughts of the people whose speech they are trying to change.

For example, I had a friend who once made a request of the form “please use he/​him pronouns in reference to me,” only to discover later that what he had really wanted (without even making this explicit in his own mind) was for me to perceive him as a man. When he discovered that I did not perceive him as a man, he was deeply hurt and upset, and it took a substantial chunk of introspection for him to untangle precisely why.

That friend did not hold it against me (in part because he recognized that he hadn’t actually asked me to change my perceptions, and in part because he recognized that he shouldn’t ask me to change my perceptions). But there are other people out there who are less principled, some of whom are deliberately attempting to Trojan-horse new updates into the people around them via language-hacking.

Most of the people out there recommending “instead of X, say Y” are not, in fact, thinking X and then pausing to top-down effortfully translate X into Y before opening their mouths. Perhaps they did that at the start of their new language habit, while getting the hang of it, but typically what’s going on is that the language just straightforwardly reflects the underlying architecture. Typically, the reason other people find (e.g.) using “I” statements so much easier than you find it is that they are actually for real doing the “I” statement thing in their heads, at a deep level. It doesn’t feel burdensome or performative to them, because it isn’t—they’re simply living in a world where the “I” statement feels true, and the other thing does not. They’re producing sentences with roughly the same amount of effortlessness and ease that you produce sentences with, insofar as there isn’t some big layer of processing between [what they want to say] and [what they actually say].

But it’s also the case that thoughts can shift in response to language usage, and if you want someone to actually start thinking in “I” statements, one of the most reliable ways is to just make them top-down use “I” statements in their speech for a while.

(This is fine if it’s up-front and open—if the request is something like “hey, want to use language in a way that will change your perceptions and mental models?” It’s less fine when that’s an unacknowledged, hoped-for side effect of an explicit request specifically shaped so as to appear innocuous and small.)

I don’t know what to do about all that social stuff, besides sort of waving in its general direction and saying “shit’s on fire, yo.”

But separately, I think it’s important to understand that, a lot of the time, when language requests are being made and accepted/​rejected, there are disconnects where both sides are typical-minding.

Finley says that the language shift isn’t a burden (because to them, it isn’t), and Gale doesn’t even consider the hypothesis that Finley is being sincere (because to Gale it’s so obviously burdensome that there’s no way Finley can deny it with a straight face), and it’s real easy for both sides to lose track of what’s actually happening.

Often, the right answer seems to me to be “Oh, okay, yeah, your brain isn’t currently running an OS where this language shift is easy and makes sense. Yeah, please don’t ‘force’ it, maybe give me a chance some time to try to give you an update patch that will suddenly make this distinction feel real to you, but in the meantime, just … keep saying what you really mean and don’t fake-translate.”

Another way to say this is, if it feels to you like I am asking you to self-censor or do some meaningless laborious translation … I probably am actually not? I’m probably trying to get you to change the way you actually think, and the language shift is one way to help bring about that transition.

Which may be a thing you don’t want to do, of course! In my culture, you’re welcome to refuse such requests, because they are deeply intimate, and you are entirely within your rights to not let people inject code into your mental algorithms willy-nilly.

(I don’t know what to say on behalf of Homo Sapiens, which is on the whole less sane and forgiving and will indeed sometimes try to inseminate you with new conceptual distinctions regardless of whether you want them, and punish you if you resist.)

Overall, though, it seems straightforwardly false to me that we are, in general, running out of mental space for new concepts (and labels for them), as objectors to new terminology often claim. Humanity is insanely hungry for new conceptual fodder; we are constantly inventing both brand-new terms and brand-new meanings for old terms.

Two years ago, nobody had “Let’s Go Brandon” in their lexicon, but now tens of millions of people do. Three years ago “PPE” was a moderately niche technical term known mostly to the blue-collar working class, and now it’s a household concept. “Flossing” had one definition, then two, then three, and now who-knows-how-many.

This is not causing most people problems, except when they are forced to absorb terms they don’t want to absorb. When I talk to less-nerdy friends about pastimes like gardening or kayaking or sports or whatever, they almost always have some new term or technique to share, some new distinction they previously hadn’t made but whose addition to their vocabulary has opened up new possibilities for them[2].

For me, new terminology falls into one of three buckets:

  • Obviously useful; tends to be adopted by my brain via a nearly-automatic process

  • Obviously useless; tends to trickle right out without costing me anything

  • Intriguing or of uncertain value; flagged for potential effortful exploration (à la NVC)

...none of these buckets leaves me feeling resentful of new words as they come in, which is an experience that a lot of people seem to have fairly regularly. I think the key thing is that I simply do not expose myself to people who are going to punish me for maintaining critical oversight of my own conceptual boundaries. It’s a truism that anyone who wants you to stop thinking isn’t your friend, but it’s equally true that anyone who insists that you think in exactly the way they’ve deemed proper is also not your friend. Or at least, they don’t see you as a friend, so much as a piece of clay to mold into a shape they find useful to their goals and priorities. Yuck.


Sapir-Whorf for Rationalists

The Sapir-Whorf Hypothesis is a claim derived from the works of Edward Sapir and Benjamin Lee Whorf, who Wikipedia tells me never published anything together and did not think of their assertions as a hypothesis.

In short, the SWH states that the structure of a language determines (or at least influences) a speaker’s perception and categorization of experience.

Sapir-Whorf reinterpreted for rationalists would go something like:

The way we go about expressing and presenting our thoughts influences the shape of those thoughts, and there are changes that we can make to our speech which at first will feel laborious or arbitrary, but which in the long term can cause our minds to fall into a configuration more conducive to clear thinking and clear communication.

Therefore, contexts in which people are trying to be more rational, and trying to coax rationality out of others, are also contexts in which it pays to enforce and adhere to clear and unambiguous norms of rational discourse. In particular: it pays to choose norms of discourse such that things which are less true are less easy to say.

In practice, this doesn’t mean inventing a bunch of new terminology so much as actually bothering to track fine (but commonplace) distinctions between near-synonymous phrasings, in a way that is already pretty natural for most people.

For instance, below I have five versions of the same claim, in a random order; I would wager that >50% of readers would agree on a ranking of those five sentences from weakest/​most uncertain to strongest/​most confident, and that if you allow for one line to be one slot out of place, agreement would jump up to 85+%:

“I claim that passe muraille is just a variant of tic-tac.”

“Obviously, passe muraille is just a variant of tic-tac.”

“It seems to me that passe muraille is just a variant of tic-tac.”

“I might be missing something, but as far as I can tell, the most sensible way to think of passe muraille is as a variant of tic-tac.”

“Passe muraille is just a variant of tic-tac.”

There are a lot of reasons why people argue that the distinctions between these sentences shouldn’t matter—

(Two of the more common ones being “I don’t want to put in the effort to track it” and “It’s useful for me to be able to equivocate between them, e.g. using verbal markers of confidence for emphasis rather than to express strength-of-justified-belief.”)

—but if you’re in a subculture whose explicit goal is clear thinking, clear communication, and collaborative truth-seeking, it seems pretty likely to me that you’ll get further if you can sustain common-knowledge agreement that these sentences are, in fact, different. That they mean different things, in the sense that they convey different strengths-of-claim in practice, and that it’s disingenuous to pretend otherwise, and counterproductive to “let” people use them interchangeably as if they were straightforwardly synonymous.

I often object to certain conversational moves, and occasionally that objection takes the form of me attempting to rewrite what my conversational partner said—trying to express what I think they genuinely believe, and meant to convey, without violating Duncan-norms in the way their original version did.

(Because usually there is indeed something in there that’s expressible in Duncan-culture; it is an intentional feature of Duncan-culture that many more things may be prosocially expressed than in most enclaves of American culture.)

After doing this, though, I often get back a counterobjection of the form “why should I have to put in that much interpretive labor?” or “if I have to put in that much work, I’m just never going to say anything[3]” or “yeah, no, I’m not gonna arbitrarily swap out words to meet some opaque and inscrutable standard.”

And there seem to me to be several things going on in that sort of response, most of which aren’t appropriate to dive into here. But there is one aspect of it that sticks out, which is that it seems to me that such people assume/​believe that I’m doing something like applying a politeness filter after the fact, or tacking on empty catch phrases to appease the audience, or similar.

Which is just not what’s happening, ever—at least, not in my head. The differences between:

  • Harley is a liar

  • Harley is lying

  • It seems like Harley is lying

  • It seems to me like Harley is lying

  • I’m having a hard time understanding how Harley could be being honest, here

… are quite real, and quite salient in Duncan-culture. Those are not sentences that would ever be mistaken for one another, and the differences between them are not cosmetic; they are crucial.

It makes sense to me that someone who does not see the distinction might find it meaningless, and think that it’s performative, and might therefore feel some distaste and some resistance at the idea of being asked to pantomime it for purely social reasons.

But that person would be mistaken about what’s actually being asked of them, at least so long as I’m the one doing the asking.

And yes, it’s sometimes onerous to craft your speech with care and precision (or to be willing to go back and rephrase a clumsy first draft). But that … comes with the territory? That is, there’s a way in which you’re either here for the goal of being less wrong or you’re not, and it shouldn’t be super controversial to say “there’s a minimum amount of effort and conformity required for participation,” just like it’s not controversial to insist that people play by the rules of soccer if they want to be on a soccer team.

Some people get this stuff wrong because they haven’t learned the rules yet, and I think those people deserve guidance and help (some of which is available in the post Basics of Rationalist Discourse). And some people get it wrong because they’re not perfect, and they need more practice or they had a bad day.

But others seem to me to get it wrong because they are actively hostile to the concept of putting in more work to accomplish the very goal we’re here to accomplish, and I’m much less sympathetic to those people. It’s one thing to reject pronoun requests out in the middle of a crowded supermarket; it’s another thing to register a username on a transgender forum and then grumble about how hard it is to track everyone’s preferred pronouns.


A restatement of the thesis from a different direction

As we convert our nonverbal observations, impressions, and reactions into verbal, explicit thoughts, and as we convert our verbal, explicit thoughts into external speech, we each follow the norms and habits typical of our own unique cultures. There is, for each of us, a way that our thinking tends to go (and possibly a few ways, if we have a few salient and very-different contexts, e.g. “me at work” or “me while depressed”).

These norms and habits do not only have forward-facing impact—they do not only shape the verbal thoughts as they emerge from the nonverbal, or the external speech as it emerges from the internal monologue. They also “reach backward,” in a sense, shaping our perceptions and the mental buckets into which we divide our experiences. Norms of speech begin to influence one’s private thoughts, and norms of private thoughts begin to influence one’s preverbal processing—over time, it becomes easier to think in ways which match the modes of explicit expression that one regularly engages in.

(This is the power of cognitive behavioral therapy (using thoughts to shape psychological state) and the dynamic described above with NVC (using words to shape thoughts).)

It is possible, therefore, to (marginally) influence one’s habits of mind by intervening on one’s lexicon. If one is having a hard time thinking more rationally by sheer force of will, one may have more luck conforming to a marginally-more-rational mode of speech, which will both force one to find better versions of one’s own thoughts (versions that are legal in the new mode) and sensitize one to new conceptual distinctions that weren’t present in the old mode.

This will eventually propagate backward to a nonzero degree, just as sloppy/​foggy/​truth-agnostic speech also propagates backward, encouraging sloppy or foggy or truth-agnostic thinking.

Therefore, contexts in which people are trying to be more rational, and trying to coax rationality out of others, are also contexts in which it pays to enforce and adhere to clear and unambiguous norms of rational discourse. In particular, it pays to choose norms of discourse such that things which are less true are less easy to say.


Conclusion

I’ve drifted a little from the generic “why new words at all?” and more into “what should LessWrong’s norms be?” So, to refocus:

New words and phrases are good and useful because they either:

  1. Track new conceptual distinctions, allowing us to preserve our ability to make those distinctions and communicate our thoughts around them

  2. Help guide us toward conceptual distinctions that are new to us, via language hacking

Both of these things are really super duper cool, and as such I think it’s quite bad to mistake “I find the fiat imposition of new words costly” (true) for “the generation of new conceptual distinctions and verbal labels to track those distinctions is costly” (basically false).

A healthy culture should indeed not force people to use language they find meaningless or useless, and our culture is doing somewhat poorly on this axis (this often gets rounded off to “political correctness” but it crops up in more places than just that).

But a healthy culture should also do far less than our culture does in the way of offering blanket discouragement of the generation, dissemination, and adoption of new terminology. The problem is one of trying to have a single nonbranching norm that is least bad for everybody, rather than just building the largest possible pile of conceptual handles and letting memetic evolution take its course unhindered by frowning shoulds.

Or, to put it more bluntly: the words in the dictionary that you don’t care about are not the problem; the problem is the people forcing you to memorize the ones you have no use for.

The end!


  1. ^

    This one was actually in the list of hopefully-to-be-written when I first began drafting this post, and got published before this one. Hooray!

  2. ^

    My own vocabulary is absolutely waxing, year by year and sometimes even week by week; I set a five-minute timer to jot down new conceptual handles I’ve added to my lexicon in the past ten years and ran out of time long before I would have run out of words:

    Moloch
    Mirror-sword
    Dropped in
    Coferences
    Trigger
    Murphyjitsu
    IFS (Internal Family Systems)
    IDC (Internal Double Crux)
    ITT (Ideological Turing Test)
    TDT (Timeless Decision Theory)
    CEV (Coherent Extrapolated Volition)
    Shoulder advisor
    Corrigibility
    Sphexishness
    Bayesian update
    Felt sense
    Doom (as in doom circle)
    Secretary problem
    Convergent goal
    Representativeness
    Fundamental attribution error
    Bucket error
    Broccoli error
    Cartesian agency
    Embedded agency
    Existential risk
    Tail risk
    Black swan
    Right-tail distribution
    80/20
    Area-under-the-curve
    Commensurability
    Shoulds
    Scissor statements
    Play to your outs
    Lenticular design
    Timmy, Johnny, Spike, Melvin, Vorthos
    Newcomb’s box problem
    Stag hunt
    Schelling point
    Chesterton’s Fence
    Goodhart
    Kegan levels
    Subject-object shift
    Hamming problem
    Ketosis
    Paleo diet
    Ball-heel-ball
    Gymnophobia
    Satisficing
    Diachronic
    Episodic

    ...each of these is something I could easily give a talk or write a short essay on; each of these is something that I frequently use or reference in my own month-to-month life, if not day-to-day. And I didn’t even make it to the part of the brainstorm where I was vaguely anticipating talking about all sorts of memes and pop-culture references, of which I have certainly added hundreds and very likely thousands, in the past ten years.

  3. ^

    My knee-jerk uncharitable reaction to this sort of sentiment, which I include for the sake of candor even though it fails on several axes that are pretty important for cooperative discourse, goes something like ”...you are aware that Eliezer wrote an essay every day for over a year, right? I mean, I get that most people can’t and shouldn’t try to hold themselves to that standard, but it seems like that shining standard should inspire some unusual effort in response. Like, if you’re not going to specifically spend spoons on precision and clarity here—if you’re just going to put forth the same amount of effort you put forth everywhere else—then … don’t be here? It feels like you just walked into a martial arts dojo and said ‘eh, doing all those kicks is too much work.’ If it’s really actually the case that saying true and accurate things is too hard, and therefore your actual options are ‘spout gunk’ or ‘say nothing,’ I have a genuine preference for the latter, and a genuine preference about where the norms settle.”