# Rationality Reading Group: Part N: A Human’s Guide to Words

This is part of a semi-monthly reading group on Eliezer Yudkowsky’s ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.

Welcome to the Rationality reading group. This fortnight we discuss Part N: A Human’s Guide to Words (pp. 677-801) and Interlude: An Intuitive Explanation of Bayes’s Theorem (pp. 803-826). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

### N. A Human’s Guide to Words

153. The Parable of the Dagger. A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?

154. The Parable of Hemlock. Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever?

You try to establish any sort of empirical proposition as being true “by definition”. Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn’t keel over—where he’s immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in—and anything you can establish “by definition” is a logical truth.

You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is “human”, even though, on your definition, you can never call Bob “human” without first observing him to be mortal.

The mere presence of words can influence thinking, sometimes misleading it.

The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs “bleggs” and the red cubes “rubes”, you may reach into the barrel, feel an egg shape, and think “Oh, a blegg.”

You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. “What is red?” “Red is a color.” “What’s a color?” “It’s a property of a thing?” “What’s a thing? What’s a property?” It never occurs to you to point to a stop sign and an apple.

The extension doesn’t match the intension. We aren’t consciously aware of our identification of a red light in the sky as “Mars”, which will probably happen regardless of our attempt to define “Mars” as “The God of War”.

157. Similarity Clusters. Your verbal definition doesn’t capture more than a tiny fraction of the category’s shared characteristics, but you try to reason as if it does. When the philosophers of Plato’s Academy claimed that the best definition of a human was a “featherless biped”, Diogenes the Cynic is said to have exhibited a plucked chicken and declared “Here is Plato’s Man.” The Platonists promptly changed their definition to “a featherless biped with broad nails”.

158. Typicality and Asymmetrical Similarity. You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.

A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you’ll get enough information that the occasional nine-fingered human won’t fool you.

160. Disguised Queries. You ask whether something “is” or “is not” a category member but can’t name the question you really want answered. What is a “man”? Is Barney the Baby Boy a “man”? The “correct” answer may depend considerably on whether the query you really want answered is “Would hemlock be a good thing to feed Barney?” or “Will Barney make a good husband?”

161. Neural Categories. You treat intuitively perceived hierarchical categories as the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn’t use them. It’s much easier for a human to notice whether an object is a “blegg” or a “rube” than to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently.

You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said “Socrates is a man”, not, “My brain perceptually classifies Socrates as a match against the ‘human’ concept”.

You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what’s left to ask by arguing, “Is it a blegg?” But if your brain’s categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there’s a leftover question.

You allow an argument to slide into being about definitions, even though it isn’t what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a “sound”, you asked the two soon-to-be arguers whether they thought a “sound” should be defined as “acoustic vibrations” or “auditory experiences”, they’d probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged.

164. Feel the Meaning. You think a word has a meaning, as a property of the word itself, rather than there being a label that your brain associates with a particular concept. When someone shouts, “Yikes! A tiger!”, evolution would not favor an organism that thinks, “Hm… I have just heard the syllables ‘Tie’ and ‘Grr’ which my fellow tribemembers associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP.” So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like “sound”.

You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we’re hard to stop; if we have no common language, we’ll draw pictures in sand. When you each understand what is in the other’s mind, you are done.

You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem—if “Mars” is defined as the God of War, or a “dolphin” is defined as a kind of fish, or “Negroes” are defined as a separate category from humans—the dictionary will reflect the standard mistake.

You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether “atheism” is a “religion” or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?

You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.

166. Empty Labels. You use complex renamings to create the illusion of inference. Is a “human” defined as a “mortal featherless biped”? Then write: “All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal.” Looks less impressive that way, doesn’t it?

167. Taboo Your Words. If Albert and Barry aren’t allowed to use the word “sound”, then Albert will have to say “A tree falling in a deserted forest generates acoustic vibrations”, and Barry will say “A tree falling in a deserted forest generates no auditory experiences”. When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.

The existence of a neat little word prevents you from seeing the details of the thing you’re trying to think about. What actually goes on in schools once you stop calling it “education”? What’s a degree, once you stop calling it a “degree”? If a coin lands “heads”, what’s its radial orientation? What is “truth”, if you can’t say “accurate” or “correct” or “represent” or “reflect” or “semantic” or “believe” or “knowledge” or “map” or “real” or any other simple term?

You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It’s part of a detective’s ordinary work to observe that Carol wore red last night, or that she has black hair; and it’s part of a detective’s ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.

You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.

You try to sneak in the connotations of a word, by arguing from a definition that doesn’t include the connotations. A “wiggin” is defined in the dictionary as a person with green eyes and black hair. The word “wiggin” also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn’t in the dictionary. So you point to someone and say: “Green eyes? Black hair? See, told you he’s a wiggin! Watch, next he’s going to steal the silverware.”

172. Arguing “By Definition”. You claim “X, by definition, is a Y!” On such occasions you’re almost certainly trying to sneak in a connotation of Y that wasn’t in your given definition. You define “human” as a “featherless biped”, and point to Socrates and say, “No feathers—two legs—he must be human!” But what you really care about is something else, like mortality. If what was in dispute was Socrates’s number of legs, the other fellow would just reply, “Whaddaya mean, Socrates’s got two legs? That’s what we’re arguing about in the first place!”

You claim “Ps, by definition, are Qs!” If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there’s no point in arguing “Men, by definition, are mortal!” The main time you feel the need to tighten the vise by insisting that something is true “by definition” is when there’s other information that calls the default inference into doubt.

You try to establish membership in an empirical cluster “by definition”. You wouldn’t feel the need to say, “Hinduism, by definition, is a religion!” because, well, of course Hinduism is a religion. It’s not just a religion “by definition”, it’s, like, an actual religion. Atheism does not resemble the central members of the “religion” cluster, so if it wasn’t for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn’t a religion. That’s why you’ve got to crush all opposition by pointing out that “Atheism is a religion” is true by definition, because it isn’t true any other way.

Your definition draws a boundary around things that don’t really belong together. You can claim, if you like, that you are defining the word “fish” to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be “wrong”. Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don’t belong on the fish list.

You use a short word for something that you won’t need to describe often, or a long word for something you’ll need to describe often. This can result in inefficient thinking, or even misapplications of Occam’s Razor, if your mind thinks that short sentences sound “simpler”. Which sounds more plausible, “God did a miracle” or “A supernatural universe-creating entity temporarily suspended the laws of physics”?

You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don’t share any other characteristics in common, why have a word for “wiggin”?

You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don’t present reasons to draw that particular boundary, trying to create an “arbitrary” word in that location is like a detective saying: “Well, I haven’t the slightest shred of support one way or the other for who could’ve murdered those orphans… but have we considered John Q. Wiffleheim as a suspect?”

You use categorization to make inferences about properties that don’t have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes. No way am I trying to summarize this one. Just read the blog post.
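The conditional-independence structure the post describes can be made concrete. Below is a minimal Naive Bayes sketch, not from the original post, applied to blegg/rube-style binary features; the priors and per-class feature probabilities are made-up assumptions for illustration:

```python
# Minimal Naive Bayes sketch for the blegg/rube example.
# The priors and feature probabilities are illustrative assumptions,
# not values taken from the original post.

priors = {"blegg": 0.5, "rube": 0.5}

# P(feature is present | class); features are treated as conditionally
# independent given the class -- the Naive Bayes assumption.
likelihoods = {
    "blegg": {"blue": 0.98, "egg_shaped": 0.95, "furred": 0.90},
    "rube":  {"blue": 0.02, "egg_shaped": 0.05, "furred": 0.10},
}

def posterior(observed):
    """observed: dict mapping feature name -> bool. Returns P(class | observed)."""
    scores = {}
    for cls, prior in priors.items():
        p = prior
        for feat, value in observed.items():
            p_true = likelihoods[cls][feat]
            p *= p_true if value else (1.0 - p_true)
        scores[cls] = p
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}

# Even with one atypical feature (not furred), the object is
# overwhelmingly classified as a blegg.
print(posterior({"blue": True, "egg_shaped": True, "furred": False}))
```

The point the sketch illustrates: once the class is known, each feature can be scored independently; arguing further about “blegg-ness” after all features are observed adds no new inference.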

You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a “triangular lightbulb”. What did you see?

You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. “Martin told Bob the building was on his left.” But “left” is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose “left” is meant, Bob’s or Martin’s?

The final post, 37 Ways That Words Can Be Wrong, contains summaries of the sequence of posts about the proper use of words.

Interlude: An Intuitive Explanation of Bayes’s Theorem—Exactly what it says on the tin.
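The interlude’s central worked example asks: if 1% of women screened have breast cancer, the test detects 80% of cancers, and it gives false positives 9.6% of the time, what is the chance a woman with a positive result actually has cancer? A few lines of Python check the answer (about 7.8%):

```python
# Bayes's theorem applied to the interlude's mammography example:
# 1% prevalence, 80% sensitivity, 9.6% false-positive rate.
def bayes_posterior(prior, true_pos, false_pos):
    # P(cancer | positive) = P(positive | cancer) P(cancer) / P(positive)
    evidence = true_pos * prior + false_pos * (1.0 - prior)
    return true_pos * prior / evidence

p = bayes_posterior(prior=0.01, true_pos=0.8, false_pos=0.096)
print(round(p, 3))  # roughly 0.078: a positive test means only ~7.8% chance of cancer
```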

This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover The World: An Introduction (pp. 834-839) and Part O: Lawful Truth (pp. 843-883). The discussion will go live on Wednesday, 2 December 2015, right here on the discussion forum of LessWrong.

• Sorry if this is confused; I am reading Korzybski’s Science and Sanity, and it seems to me that one of the main ideas in the book is that our perception of the world gets encoded in language, and then the usage of the old-style language becomes an obstacle to new, improved models of the world.

For example, some words don’t exactly correspond to the territory, but they are in everyone’s vocabulary. No matter how much we have learned about the Solar System, we still talk about how the “Sun rises”. Despite having separate words for “time” and “space”, to understand relativity one has to understand that they are not separate; that there is actually a “space-time”. People debate whether some behavior is caused by genes or caused by environment, while in fact it is a result of specific genes in a specific environment.

So… maybe we should update our language to represent correctly our current scientific knowledge? And maybe that would remove a few unconscious obstacles, and thus make us all a bit more rational? (Don’t ask me how to do that specifically. I haven’t finished reading the book yet.)

And it’s not just the individual words, but also the way of typically using them, which reflects some implicit assumptions. If I understand Korzybski’s argument correctly, the “Aristotelian” way of using language is having nouns which represent some kind of essence, and then attaching various attributes to these nouns. The newer, “non-Aristotelian” way of using language describes structures, relationships between the parts.

As an example, imagine an ancient myth where the creator of the world first created, e.g., the grass, and only afterwards decided to give the grass a green color. In the language of nouns and adjectives this makes perfect sense: there was a noun, and later it received an adjective. In the language of science, it’s like: WTF?! Did the original “colorless” grass contain chloroplasts or didn’t it? Even if we try to go along with the story, adding chloroplasts is not merely a change of adjective: it means radically changing the functionality of the plant. An important part of rational thinking is seeing the world as a mechanism of gears, instead of a set of black-box essences that mysteriously get assigned some adjectives.

This reminds me of how some people try to feign knowledge in some area by learning a few keywords and then using them in the wrong way. They have the correct “nouns” and “adjectives”, but they lack the “structure”.

• > I am reading Korzybski’s Science and Sanity

FYI, Eliezer recommended Language in Thought and Action over Science and Sanity.

> So… maybe we should update our language to represent correctly our current scientific knowledge? And maybe that would remove a few unconscious obstacles, and thus make us all a bit more rational? (Don’t ask me how to do that specifically. I haven’t finished reading the book yet.)

The Whorfian hypothesis states that our perception of reality is determined by our thought processes, which are influenced by the language we use. In this way language shapes our reality and tells us how to think about and respond to that reality. Generally, the Whorfian hypothesis is seen as too extreme, and it makes more sense to talk about linguistic relativity in terms of degree rather than absoluteness or determinism. But the hypothesis is not totally wrong either. Language plays a role in shaping our thoughts and in modifying our perception. So, I suppose changing the language might help. The thing with language, though, is that it is in many ways a product of the people who use it. Through use it evolves and changes. So, I think you have it the wrong way around. Controlling or tampering with the language that people use is going to be very hard, but once you change people’s paradigms, that will flow into the language, and it will change naturally.

On a related note, I would suspect that quantum theory would be easier for people to comprehend if we had a more Native American worldview. There is a book called Blackfoot Physics about this idea.

• Thanks, I already forgot that debate. Now it makes much more sense after I’ve seen the book!

Re: Whorfian hypothesis—I guess the important thing when debating the impact of language on perception is to be specific about which parts of the language impact which parts of perception. For example, if a language uses two different words for “light blue” and “dark blue” instead of one word for “blue”, it may make people perceive colors differently (e.g. where a person from one culture would insist that two objects have “the same color”, a person from another culture would insist they have “two different colors”), but ultimately the effect is limited to thinking about some part of the color spectrum. But this specific mapping of language differences to perception differences is usually ignored, and people just give a few language differences, often trivial, and then claim that any change of perception can happen.

• Clusters in thingspace have been bothering me. Or rather, EY’s discussion of them. What I want to do is rephrase the term as “clusters-according-to-cognitive-system-C.” Thingspace is high-dimensional, and which dimensions loom larger than others depends on the perceptual and motivational structure of the cognizer. In machine learning classification algorithms, it’s common to normalize each dimension by its mean and standard deviation, or by its extrema, but there’s no hard and fast rule. And besides, first one typically selects which dimensions to model.

If dimensions aren’t normalized, it matters how we scale them. Are two events separated by ten minutes and no spatial distance, closer or farther apart than two events separated by 200 km and one second? Well, it depends. Are we studying cosmology, or planning a vacation?
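To make the scaling point concrete, here is a small sketch (made-up numbers, and a hypothetical `euclid` helper): whether the ten-minute event or the 200 km event counts as “closer” to a reference event flips when we change the units on the time axis.

```python
# How scaling choices change "closeness" in a two-dimensional thingspace
# whose axes are time (seconds) and distance (km). All numbers are
# made up for illustration; euclid is a hypothetical helper.
import math

def euclid(a, b, scale=(1.0, 1.0)):
    """Euclidean distance after dividing each axis by its scale factor."""
    return math.sqrt(sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scale)))

ref = (0.0, 0.0)        # reference event: (seconds, km)
event_a = (600.0, 0.0)  # ten minutes later, same place
event_b = (1.0, 200.0)  # one second later, 200 km away

# Measuring time in seconds, event A is farther from ref than event B...
print(euclid(ref, event_a) > euclid(ref, event_b))   # True
# ...but measuring time in hours (scale of 3600 s per unit), A is closer.
print(euclid(ref, event_a, scale=(3600.0, 1.0))
      < euclid(ref, event_b, scale=(3600.0, 1.0)))   # True
```

This is exactly the normalization choice mentioned above: before clustering, one has to decide what one unit of each dimension is worth, and different choices yield different clusters.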

None of this invalidates EY’s cautions about the use of words. It just adds one more aspect to watch out for.