Where to Draw the Boundaries?

Followup to: Where to Draw the Boundary?

Figuring where to cut reality in order to carve along the joints—figuring which things are similar to each other, which things are clustered together: this is the problem worthy of a rationalist. It is what people should be trying to do, when they set out in search of the floating essence of a word.

Once upon a time it was thought that the word “fish” included dolphins …

The one comes to you and says:

The list: {salmon, guppies, sharks, dolphins, trout} is just a list—you can’t say that a list is wrong. You draw category boundaries in specific ways to capture tradeoffs you care about: sailors in the ancient world wanted a word to describe the swimming finned creatures that they saw in the sea, which included salmon, guppies, sharks—and dolphins. That grouping may not be the one favored by modern evolutionary biologists, but an alternative categorization system is not an error, and borders are not objectively true or false. You’re not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. So my definition of fish cannot possibly be ‘wrong,’ as you claim. I can define a word any way I want—in accordance with my values!

So, there is a legitimate complaint here. It’s true that sailors in the ancient world had a legitimate reason to want a word in their language whose extension was {salmon, guppies, sharks, dolphins, ...}. (And modern scholars writing a translation for present-day English speakers might even translate that word as fish, because most members of that category are what we would call fish.) It indeed would not necessarily be helping the sailors to tell them that they need to exclude dolphins from the extension of that word, and instead include dolphins in the extension of their word for {monkeys, squirrels, horses ...}. Likewise, most modern biologists have little use for a word that groups dolphins and guppies together.

When rationalists say that definitions can be wrong, we don’t mean that there’s a unique category boundary that is the True floating essence of a word, and that all other possible boundaries are wrong. We mean that in order for a proposed category boundary to not be wrong, it needs to capture some statistical structure in reality, even if reality is surprisingly detailed and there can be more than one such structure.

The reason the sailors’ concept of water-dwelling animals isn’t necessarily wrong (at least within a particular domain of application) is that dolphins and fish actually do have things in common due to convergent evolution, despite their differing ancestries. If we’ve been told that “dolphins” are water-dwellers, we can correctly predict that they’re likely to have fins and a hydrodynamic shape, even if we’ve never seen a dolphin ourselves. On the other hand, if we predict that dolphins probably lay eggs because 97% of known fish species are oviparous, we’d get the wrong answer.

A standard technique for understanding why some objects belong in the same “category” is to (pretend that we can) visualize objects as existing in a very-high-dimensional configuration space, but this “Thingspace” isn’t particularly well-defined: we want to map every property of an object to a dimension in our abstract space, but it’s not clear how one would enumerate all possible “properties.” But this isn’t a major concern: we can form a space with whatever properties or variables we happen to be interested in. Different choices of properties correspond to different cross sections of the grander Thingspace. Excluding properties from a collection would result in a “thinner”, lower-dimensional subspace of the space defined by the original collection of properties, which would in turn be a subspace of grander Thingspace, just as a line is a subspace of a plane, and a plane is a subspace of three-dimensional space.

Concerning dolphins: there would be a cluster of water-dwelling animals in the subspace of dimensions that water-dwelling animals are similar on, and a cluster of mammals in the subspace of dimensions that mammals are similar on, and dolphins would belong to both of them, just as the vector [1.1, 2.1, 9.1, 10.2] in the four-dimensional vector space ℝ⁴ is simultaneously close to [1, 2, 2, 1] in the subspace spanned by x₁ and x₂, and close to [8, 9, 9, 10] in the subspace spanned by x₃ and x₄.
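
To make the subspace-closeness claim concrete, here is a minimal numerical sketch (not part of the original argument; it assumes numpy and just computes Euclidean distances restricted to the named coordinates):

```python
import numpy as np

dolphin       = np.array([1.1, 2.1, 9.1, 10.2])
fish_center   = np.array([1.0, 2.0, 2.0, 1.0])   # the point [1, 2, 2, 1]
mammal_center = np.array([8.0, 9.0, 9.0, 10.0])  # the point [8, 9, 9, 10]

def subspace_distance(a, b, dims):
    """Euclidean distance between a and b, restricted to the coordinate indices in dims."""
    idx = list(dims)
    return np.linalg.norm(a[idx] - b[idx])

# In the subspace spanned by x1 and x2 (indices 0 and 1), the dolphin-vector
# is close to [1, 2, ...] and far from [8, 9, ...]:
print(subspace_distance(dolphin, fish_center,   (0, 1)))  # ≈ 0.14
print(subspace_distance(dolphin, mammal_center, (0, 1)))  # ≈ 9.76

# In the subspace spanned by x3 and x4 (indices 2 and 3), it's the reverse:
print(subspace_distance(dolphin, fish_center,   (2, 3)))  # ≈ 11.62
print(subspace_distance(dolphin, mammal_center, (2, 3)))  # ≈ 0.22
```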

Humans are already functioning intelligences (well, sort of), so the categories that humans propose of their own accord won’t be maximally wrong: no one would try to propose a word for “configurations of matter that match any of these 29,122 five-megabyte descriptions but have no other particular properties in common.” (Indeed, because we are not-superexponentially-vast minds that evolved to function in a simple, ordered universe, it actually takes some ingenuity to construct a category that wrong.)

This leaves aspiring instructors of rationality in something of a predicament: in order to teach people how categories can be more or (ahem) less wrong, you need some sort of illustrative example, but since the most natural illustrative examples won’t be maximally wrong, some people might fail to appreciate the lesson, leaving one of your students to fill in the gap in your lecture series eleven years later.

The pedagogical function of telling people to “stop playing nitwit games and admit that dolphins don’t belong on the fish list” is to point out that, without denying the obvious similarities that motivated the initial categorization {salmon, guppies, sharks, dolphins, trout, ...}, there is more structure in the world: to maximize the (logarithm of the) probability your world-model assigns to your observations of dolphins, you need to take into consideration the many aspects of reality in which the grouping {monkeys, squirrels, dolphins, horses ...} makes more sense. To the extent that relying on the initial category guess would result in a worse Bayes-score, we might say that that category is “wrong.” It might have been “good enough” for the purposes of the sailors of yore, but as humanity has learned more, as our model of Thingspace has expanded with more dimensions and more details, we can see the ways in which the original map failed to carve reality at the joints.
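
As a toy illustration of the Bayes-score comparison (a hedged sketch, not from the original post: the 0.97 comes from the oviparous-species figure quoted above, while the 0.01 for the mammal-informed model is a made-up stand-in):

```python
import math

# Observation: a dolphin does not lay eggs.
# Model A reasons only from the {salmon, guppies, sharks, dolphins, ...} grouping
# and predicts egg-laying at that group's base rate: P(lays eggs) = 0.97.
# Model B uses the {monkeys, squirrels, dolphins, horses, ...} grouping and
# predicts (say) P(lays eggs) = 0.01 -- this exact number is an assumption.
score_A = math.log2(1 - 0.97)  # log-probability assigned to the actual observation
score_B = math.log2(1 - 0.01)

print(score_A)  # ≈ -5.06 bits: Model A is heavily penalized on this observation
print(score_B)  # ≈ -0.01 bits: Model B loses almost nothing
```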


The one replies:

But reality doesn’t come with its joints pre-labeled. Questions about how to draw category boundaries are best understood as questions about values or priorities rather than about the actual content of the actual world. I can call dolphins “fish” and go on to make just as accurate predictions about dolphins as you can. Everything we identify as a joint is only a joint because we care about it.

No. Everything we identify as a joint is a joint not “because we care about it”, but because it helps us think about the things we care about.

Which dimensions of Thingspace you bother paying attention to might depend on your values, and the clusters returned by your brain’s similarity-detection algorithms might “split” or “collapse” according to which subspace you’re looking at. But in order for your map to be useful in the service of your values, it needs to reflect the statistical structure of things in the territory—which depends on the territory, not your values.

There is an important difference between “not including mountains on a map because it’s a political map that doesn’t show any mountains” and “not including Mt. Everest on a geographic map, because my sister died trying to climb Everest and seeing it on the map would make me feel sad.”

There is an important difference between “identifying this pill as not being ‘poison’ allows me to focus my uncertainty about what I’ll observe after administering the pill to a human (even if most possible minds have never seen a ‘human’ and would never waste cycles imagining administering the pill to one)” and “identifying this pill as not being ‘poison’, because if I publicly called it ‘poison’, then the manufacturer of the pill might sue me.”

There is an important difference between having a utility function defined over a statistical model’s performance against specific real-world data (even if another mind with different values would be interested in different data), and having a utility function defined over features of the model itself.

Remember how appealing to the dictionary is irrational when the actual motivation for an argument is about whether to infer a property on the basis of category-membership? But at least the dictionary has the virtue of documenting typical usage of our shared communication signals: you can at least see how “You’re defecting from common usage” might feel like a sensible thing to say, even if one’s true rejection lies elsewhere. In contrast, this motion of appealing to personal values (!?!) is so deranged that Yudkowsky apparently didn’t even realize in 2008 that he might need to warn us against it!

You can’t change the categories your mind actually uses and still perform as well on prediction tasks—although you can change your verbally reported categories, much as how one can verbally report “believing” in an invisible, inaudible, flour-permeable dragon in one’s garage without having any false anticipations-of-experience about the garage.

This may be easier to see with a simple numerical example.

Suppose we have some entities that exist in the three-dimensional vector space ℝ³. There’s one cluster of entities centered at [1, 2, 3], and we call those entities Foos, and there’s another cluster of entities centered at [2, 4, 6], which we call Quuxes.

The one comes and says, “Well, I’m going to redefine the meaning of ‘Foo’ such that it also includes the things near [2, 4, 6] as well as the Foos-with-respect-to-the-old-definition, and you can’t say my new definition is wrong, because if I observe [2, _, _] (where the underscores represent yet-unobserved variables), I’m going to categorize that entity as a Foo but still predict that the unobserved variables are 4 and 6, so there.”

But if the one were actually using the new concept of Foo internally and not just saying the words “categorize it as a Foo”, they wouldn’t predict 4 and 6! They’d predict 3 and 4.5, because those are the average values of a generic Foo-with-respect-to-the-new-definition in the 2nd and 3rd coordinates (because (2+4)/2 = 6/2 = 3 and (3+6)/2 = 9/2 = 4.5). (The already-observed 2 in the first coordinate isn’t average, but by conditional independence, that only affects our prediction of the other two variables by means of its effect on our “prediction” of category-membership.) The cluster-structure knowledge that “entities for which x₁≈2, also tend to have x₂≈4 and x₃≈6” needs to be represented somewhere in the one’s mind in order to get the right answer. And given that that knowledge needs to be represented, it might also be useful to have a word for “the things near [2, 4, 6]” in order to efficiently share that knowledge with others.
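
One way to make this prediction logic concrete is a sketch (not from the original post) that models the two clusters as tight, equally-probable spherical Gaussians with the centers given above; the cluster width sigma is an assumption:

```python
import numpy as np

FOO_OLD = np.array([1.0, 2.0, 3.0])  # cluster centered at [1, 2, 3]
QUUX    = np.array([2.0, 4.0, 6.0])  # cluster centered at [2, 4, 6]

def predict_unobserved(x1, centers, sigma=0.3):
    """Predict (x2, x3) given an observed x1, under a mixture of equally-probable
    spherical Gaussian clusters with the given centers and width sigma (an assumption)."""
    # Posterior probability of each cluster, given only the first coordinate:
    weights = np.array([np.exp(-(x1 - c[0]) ** 2 / (2 * sigma ** 2)) for c in centers])
    weights /= weights.sum()
    # Within a cluster, the unobserved coordinates are conditionally independent of x1,
    # so the prediction is the posterior-weighted average of the cluster centers:
    return sum(w * c[1:] for w, c in zip(weights, centers))

# Using the full cluster structure, observing x1 = 2 effectively pins down the
# [2, 4, 6] cluster, whatever we happen to call it:
print(predict_unobserved(2.0, [FOO_OLD, QUUX]))  # ≈ [3.99, 5.99]

# Throwing away that structure and treating new-"Foo" as one undifferentiated blob
# gives the generic prediction mentioned above:
print((FOO_OLD[1:] + QUUX[1:]) / 2)  # [3.0, 4.5]
```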

Of course, there isn’t going to be a unique way to encode the knowledge into natural language: there’s no reason the word/symbol “Foo” needs to represent “the stuff near [1, 2, 3]” rather than “both the stuff near [1, 2, 3] and also the stuff near [2, 4, 6]”. And you might very well indeed want a short word like “Foo” that encompasses both clusters, for example, if you want to contrast them to another cluster much farther away, or if you’re mostly interested in x₁ and the difference between x₁≈1 and x₁≈2 doesn’t seem large enough to notice.

But if speakers of a particular language were already using “Foo” to specifically talk about the stuff near [1, 2, 3], then you can’t swap in a new definition of “Foo” without changing the truth values of sentences involving the word “Foo.” Or rather: sentences involving Foo-with-respect-to-the-old-definition are different propositions from sentences involving Foo-with-respect-to-the-new-definition, even if they get written down using the same symbols in the same order.

Naturally, all this becomes much more complicated as we move away from the simplest idealized examples.

For example, if the points are more evenly distributed in configuration space rather than belonging to cleanly-distinguishable clusters, then essentialist “X is a Y” cognitive algorithms perform less well, and we get Sorites paradox-like situations, where we know roughly what we mean by a word, but are confronted with real-world (not merely hypothetical) edge cases that we’re not sure how to classify.

Or it might not be obvious which dimensions of Thingspace are most relevant.

Or there might be social or psychological forces anchoring word usages on identifiable Schelling points that are easy for different people to agree upon, even at the cost of some statistical “fit.”

We could go on listing more such complications, where we seem to be faced with somewhat arbitrary choices about how to describe the world in language. But the fundamental thing is this: the map is not the territory. Arbitrariness in the map (what color should Texas be?) doesn’t correspond to arbitrariness in the territory. Where the structure of human natural language doesn’t fit the structure in reality—where we’re not sure whether to say that a sufficiently small collection of sand “is a heap”, because we don’t know how to specify the positions of the individual grains of sand, or compute that the collection has a Standard Heap-ness Coefficient of 0.64—that’s just a bug in our human power of vibratory telepathy. You can exploit the bug to confuse humans, but that doesn’t change reality.

Sometimes we might wish that something belonged to a category that it doesn’t (with respect to the category boundaries that we would ordinarily use), so it’s tempting to avert our attention from this painful reality with appeal-to-arbitrariness language-lawyering, selectively applying our philosophy-of-language skills to pretend that we can define a word any way we want with no consequences. (“I’m not late!—well, okay, we agree that I arrived half an hour after the scheduled start time, but whether I was late depends on how you choose to draw the category boundaries of ‘late’, which is subjective.”)

For this reason it is said that knowing about philosophy of language can hurt people. Those who know that words don’t have intrinsic definitions, but don’t know (or have seemingly forgotten) about the three or six dozen optimality criteria governing the use of words, can easily fashion themselves a Fully General Counterargument against any claim of the form “X is a Y”—

Y doesn’t unambiguously refer to the thing you’re trying to point at. There’s no Platonic essence of Y-ness: once we know the particular facts about X that we want to know, there’s no question left to ask. Clearly, you don’t understand how words work, therefore I don’t need to consider whether there are any non-ontologically-confused reasons for someone to say “X is a Y.”

Isolated demands for rigor are great for winning arguments against humans who aren’t as philosophically sophisticated as you, but the evolved systems of perception and language by which humans process and communicate information about reality predate the Sequences. Every claim that X is a Y is an expression of cognitive work that cannot simply be dismissed just because most claimants don’t know how they work. Platonic essences are just the limiting case as the overlap between clusters in Thingspace goes to zero.

You should never say, “The choice of word is arbitrary; therefore I can say whatever I want”—which amounts to, “The choice of category is arbitrary, therefore I can believe whatever I want.” If the choice were really arbitrary, you would be satisfied with the choice being made arbitrarily: by flipping a coin, or calling a random number generator. (It doesn’t matter which.) Whatever criterion your brain is using to decide which word or belief you want, is your non-arbitrary reason.

If what you want isn’t currently true in reality, maybe there’s some action you could take to make it become true. To search for that action, you’re going to need accurate beliefs about what reality is currently like. To enlist the help of others in your planning, you’re going to need precise terminology to communicate accurate beliefs about what reality is currently like. Even when—especially when—the current reality is inconvenient.

Even when it hurts.

(Oh, and if you’re actually trying to optimize other people’s models of the world, rather than the world itself—you could just lie, rather than playing clever category-gerrymandering mind games. It would be a lot simpler!)


Imagine that you’ve had a peculiar job in a peculiar factory for a long time. After many mind-numbing years of sorting bleggs and rubes all day and enduring being trolled by Susan the Senior Sorter and her evil sense of humor, you finally work up the courage to ask Bob the Big Boss for a promotion.

“Sure,” Bob says. “Starting tomorrow, you’re our new Vice President of Sorting!”

“Wow, this is amazing,” you say. “I don’t know what to ask first! What will my new responsibilities be?”

“Oh, your responsibilities will be the same: sort bleggs and rubes every Monday through Friday from 9 a.m. to 5 p.m.”

You frown. “Okay. But Vice Presidents get paid a lot, right? What will my salary be?”

“Still $9.50 an hour, just like now.”

You grimace. “O–kay. But Vice Presidents get more authority, right? Will I be someone’s boss?”

“No, you’ll still report to Susan, just like now.”

You snort. “A Vice President, reporting to a mere Senior Sorter?”

“Oh, no,” says Bob. “Susan is also getting promoted—to Senior Vice President of Sorting!”

You lose it. “Bob, this is bullshit. When you said I was getting promoted to Vice President, that created a bunch of probabilistic expectations in my mind: you made me anticipate getting new challenges, more money, and more authority, and then you reveal that you’re just slapping an inflated title on the same old dead-end job. It’s like handing me a blegg, and then saying that it’s a rube that just happens to be blue, furry, and egg-shaped … or telling me you have a dragon in your garage, except that it’s an invisible, silent dragon that doesn’t breathe. You may think you’re being kind to me by asking me to believe in an unfalsifiable promotion, but when you replace the symbol with the substance, it’s actually just cruel. Stop fucking with my head! … sir.”

Bob looks offended. “This promotion isn’t unfalsifiable,” he says. “It says, ‘Vice President of Sorting’ right here on the employee roster. That’s a sensory experience that you can make falsifiable predictions about. I’ll even get you business cards that say, ‘Vice President of Sorting.’ That’s another falsifiable prediction. Using language in a way you dislike is not lying. The propositions you claim are false—about new job tasks, increased pay and authority—are not what the title is meant to convey, and this is known to everyone involved; it is not a secret.”


Bob kind of has a point. It’s tempting to argue that things like titles and names are part of the map, not the territory. Unless the name is written down. Or spoken aloud (instantiated in sound waves). Or thought about (instantiated in neurons). The map is part of the territory: insisting that the title isn’t part of the “job” and therefore violates the maxim that meaningful beliefs must have testable consequences, doesn’t quite work. Observing the title on the employee roster indeed tightly constrains your anticipated experience of the title on the business card. So, that’s a non-gerrymandered, predictively useful category … right? What is there for a rationalist to complain about?

To see the problem, we must turn to information theory.

Let’s imagine that an abstract Job has four binary properties that can either be high or low—task complexity, pay, authority, and prestige of title—forming a four-dimensional Jobspace. Suppose that two-thirds of Jobs have {complexity: low, pay: low, authority: low, title: low} (which we’ll write more briefly as [low, low, low, low]) and the remaining one-third have {complexity: high, pay: high, authority: high, title: high} (which we’ll write as [high, high, high, high]).

Task complexity and authority are hard to perceive outside of the company, and pay is only negotiated after an offer is made, so people deciding to seek a Job can only make decisions based on the Job’s title: but that’s fine, because in the scenario described, you can infer any of the other properties from the title with certainty. Because the properties are either all low or all high, the joint entropy of title and any other property is going to have the same value as either of the individual property entropies, namely ⅔ log₂ (3/2) + ⅓ log₂ 3 ≈ 0.918 bits.

But since H(pay) = H(title) = H(pay, title), the mutual information I(pay; title) also comes out to ≈ 0.918 bits, because I(pay; title) = H(pay) + H(title) − H(pay, title) by definition.

Then suppose a lot of companies get Bob’s bright idea: half of the Jobs that used to occupy the point [low, low, low, low] in Jobspace, get their title coordinate changed to high. So now one-third of the Jobs are at [low, low, low, low], another third are at [low, low, low, high], and the remaining third are at [high, high, high, high]. What happens to the mutual information I(pay; title)?

I(pay; title) = H(pay) + H(title) − H(pay, title)
= (⅔ log₂ (3/2) + ⅓ log₂ 3) + (⅔ log₂ (3/2) + ⅓ log₂ 3) − 3(⅓ log₂ 3)
= (4/3) log₂ (3/2) + ⅔ log₂ 3 − log₂ 3 ≈ 0.2516 bits.
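
For readers who want to double-check the arithmetic in both scenarios, here is a minimal sketch (not part of the original post; it just recomputes the entropies from the joint distributions described above):

```python
from collections import Counter
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution given as probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), for a joint distribution {(x, y): probability}."""
    px, py = Counter(), Counter()
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# Before Bob's bright idea: (pay, title) is (low, low) with probability 2/3
# and (high, high) with probability 1/3.
before = {("low", "low"): 2/3, ("high", "high"): 1/3}

# After: half of the formerly all-low Jobs have had their titles inflated.
after = {("low", "low"): 1/3, ("low", "high"): 1/3, ("high", "high"): 1/3}

print(mutual_information(before))  # ≈ 0.918 bits
print(mutual_information(after))   # ≈ 0.252 bits
```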

It went down! Bob and his analogues, having observed that employees and Job-seekers prefer Jobs with high-prestige titles, thought they were being benevolent by making more Jobs have the desired titles. And perhaps they have helped savvy employees who can arbitrage the gap between the new and old worlds by being able to put “Vice President” on their resumés when searching for a new Job.

But from the perspective of people who wanted to use titles as an easily-communicable correlate of the other features of a Job, all that’s actually been accomplished is making language less useful.


In view of the preceding discussion, to “37 Ways That Words Can Be Wrong”, we might wish to append, “38. Your definition draws a boundary around a cluster in an inappropriately ‘thin’ subspace of Thingspace that excludes relevant variables, resulting in fallacies of compression.”

Miyamoto Musashi is quoted:

The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him.

Similarly, the primary thing when you take a word in your lips is your intention to reflect the territory, whatever the means. Whenever you categorize, label, name, define, or draw boundaries, you must cut through to the correct answer in the same movement. If you think only of categorizing, labeling, naming, defining, or drawing boundaries, you will not be able actually to reflect the territory.

Do not ask whether there’s a rule of rationality saying that you shouldn’t call dolphins fish. Ask whether dolphins are fish.

And if you speak overmuch of the Way you will not attain it.

(Thanks to Alicorn, Sarah Constantin, Ben Hoffman, Zvi Mowshowitz, Jessica Taylor, and Michael Vassar for feedback.)