Where to Draw the Boundaries?

Fol­lowup to: Where to Draw the Boundary?

Figur­ing where to cut re­al­ity in or­der to carve along the joints—figur­ing which things are similar to each other, which things are clus­tered to­gether: this is the prob­lem wor­thy of a ra­tio­nal­ist. It is what peo­ple should be try­ing to do, when they set out in search of the float­ing essence of a word.

Once upon a time it was thought that the word “fish” in­cluded dolphins …

The one comes to you and says:

The list: {salmon, gup­pies, sharks, dolphins, trout} is just a list—you can’t say that a list is wrong. You draw cat­e­gory bound­aries in spe­cific ways to cap­ture trade­offs you care about: sailors in the an­cient world wanted a word to de­scribe the swim­ming finned crea­tures that they saw in the sea, which in­cluded salmon, gup­pies, sharks—and dolphins. That group­ing may not be the one fa­vored by mod­ern evolu­tion­ary biol­o­gists, but an al­ter­na­tive cat­e­go­riza­tion sys­tem is not an er­ror, and bor­ders are not ob­jec­tively true or false. You’re not stand­ing in defense of truth if you in­sist on a word, brought ex­plic­itly into ques­tion, be­ing used with some par­tic­u­lar mean­ing. So my defi­ni­tion of fish can­not pos­si­bly be ‘wrong,’ as you claim. I can define a word any way I want—in ac­cor­dance with my val­ues!

So, there is a le­gi­t­i­mate com­plaint here. It’s true that sailors in the an­cient world had a le­gi­t­i­mate rea­son to want a word in their lan­guage whose ex­ten­sion was {salmon, gup­pies, sharks, dolphins, …}. (And mod­ern schol­ars writ­ing a trans­la­tion for pre­sent-day English speak­ers might even trans­late that word as fish, be­cause most mem­bers of that cat­e­gory are what we would call fish.) It in­deed would not nec­es­sar­ily be helping the sailors to tell them that they need to ex­clude dolphins from the ex­ten­sion of that word, and in­stead in­clude dolphins in the ex­ten­sion of their word for {mon­keys, squir­rels, horses …}. Like­wise, most mod­ern biol­o­gists have lit­tle use for a word that groups dolphins and gup­pies to­gether.

When ra­tio­nal­ists say that defi­ni­tions can be wrong, we don’t mean that there’s a unique cat­e­gory bound­ary that is the True float­ing essence of a word, and that all other pos­si­ble bound­aries are wrong. We mean that in or­der for a pro­posed cat­e­gory bound­ary to not be wrong, it needs to cap­ture some statis­ti­cal struc­ture in re­al­ity, even if re­al­ity is sur­pris­ingly de­tailed and there can be more than one such struc­ture.

The reason that the sailors’ concept of water-dwelling animals isn’t necessarily wrong (at least within a particular domain of application) is because dolphins and fish actually do have things in common due to convergent evolution, despite their differing ancestries. If we’ve been told that “dolphins” are water-dwellers, we can correctly predict that they’re likely to have fins and a hydrodynamic shape, even if we’ve never seen a dolphin ourselves. On the other hand, if we predict that dolphins probably lay eggs because 97% of known fish species are oviparous, we’d get the wrong answer.

A stan­dard tech­nique for un­der­stand­ing why some ob­jects be­long in the same “cat­e­gory” is to (pre­tend that we can) vi­su­al­ize ob­jects as ex­ist­ing in a very-high-di­men­sional con­figu­ra­tion space, but this “Thingspace” isn’t par­tic­u­larly well-defined: we want to map ev­ery prop­erty of an ob­ject to a di­men­sion in our ab­stract space, but it’s not clear how one would enu­mer­ate all pos­si­ble “prop­er­ties.” But this isn’t a ma­jor con­cern: we can form a space with what­ever prop­er­ties or vari­ables we hap­pen to be in­ter­ested in. Differ­ent choices of prop­er­ties cor­re­spond to differ­ent cross sec­tions of the grander Thingspace. Ex­clud­ing prop­er­ties from a col­lec­tion would re­sult in a “thin­ner”, lower-di­men­sional sub­space of the space defined by the origi­nal col­lec­tion of prop­er­ties, which would in turn be a sub­space of grander Thingspace, just as a line is a sub­space of a plane, and a plane is a sub­space of three-di­men­sional space.

Con­cern­ing dolphins: there would be a cluster of wa­ter-dwelling an­i­mals in the sub­space of di­men­sions that wa­ter-dwelling an­i­mals are similar on, and a cluster of mam­mals in the sub­space of di­men­sions that mam­mals are similar on, and dolphins would be­long to both of them, just as the vec­tor [1.1, 2.1, 9.1, 10.2] in the four-di­men­sional vec­tor space ℝ⁴ is si­mul­ta­neously close to [1, 2, 2, 1] in the sub­space spanned by x₁ and x₂, and close to [8, 9, 9, 10] in the sub­space spanned by x₃ and x₄.
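
To make that arithmetic concrete, here is a minimal sketch in Python (NumPy and the subspace_distance helper are incidental illustration choices, nothing more):

```python
import numpy as np

dolphin_like = np.array([1.1, 2.1, 9.1, 10.2])
cluster_a = np.array([1, 2, 2, 1])    # reference point for the x1, x2 subspace
cluster_b = np.array([8, 9, 9, 10])   # reference point for the x3, x4 subspace

def subspace_distance(u, v, dims):
    """Euclidean distance between u and v, restricted to the listed coordinates."""
    return np.linalg.norm(u[dims] - v[dims])

print(subspace_distance(dolphin_like, cluster_a, [0, 1]))  # ~0.14: close in the x1, x2 subspace
print(subspace_distance(dolphin_like, cluster_b, [0, 1]))  # ~9.76: far in the x1, x2 subspace
print(subspace_distance(dolphin_like, cluster_b, [2, 3]))  # ~0.22: close in the x3, x4 subspace
print(subspace_distance(dolphin_like, cluster_a, [2, 3]))  # ~11.62: far in the x3, x4 subspace
```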

Hu­mans are already func­tion­ing in­tel­li­gences (well, sort of), so the cat­e­gories that hu­mans pro­pose of their own ac­cord won’t be max­i­mally wrong: no one would try to pro­pose a word for “con­figu­ra­tions of mat­ter that match any of these 29,122 five-megabyte de­scrip­tions but have no other par­tic­u­lar prop­er­ties in com­mon.” (In­deed, be­cause we are not-su­per­ex­po­nen­tially-vast minds that evolved to func­tion in a sim­ple, or­dered uni­verse, it ac­tu­ally takes some in­ge­nu­ity to con­struct a cat­e­gory that wrong.)

This leaves as­piring in­struc­tors of ra­tio­nal­ity in some­thing of a predica­ment: in or­der to teach peo­ple how cat­e­gories can be more or (ahem) less wrong, you need some sort of illus­tra­tive ex­am­ple, but since the most nat­u­ral illus­tra­tive ex­am­ples won’t be max­i­mally wrong, some peo­ple might fail to ap­pre­ci­ate the les­son, leav­ing one of your stu­dents to fill in the gap in your lec­ture se­ries eleven years later.

The ped­a­gog­i­cal func­tion of tel­ling peo­ple to “stop play­ing nitwit games and ad­mit that dolphins don’t be­long on the fish list” is to point out that, with­out deny­ing the ob­vi­ous similar­i­ties that mo­ti­vated the ini­tial cat­e­go­riza­tion {salmon, gup­pies, sharks, dolphins, trout, …}, there is more struc­ture in the world: to max­i­mize the (log­a­r­ithm of the) prob­a­bil­ity your world-model as­signs to your ob­ser­va­tions of dolphins, you need to take into con­sid­er­a­tion the many as­pects of re­al­ity in which the group­ing {mon­keys, squir­rels, dolphins, horses …} makes more sense. To the ex­tent that rely­ing on the ini­tial cat­e­gory guess would re­sult in a worse Bayes-score, we might say that that cat­e­gory is “wrong.” It might have been “good enough” for the pur­poses of the sailors of yore, but as hu­man­ity has learned more, as our model of Thingspace has ex­panded with more di­men­sions and more de­tails, we can see the ways in which the origi­nal map failed to carve re­al­ity at the joints.


The one replies:

But re­al­ity doesn’t come with its joints pre-la­beled. Ques­tions about how to draw cat­e­gory bound­aries are best un­der­stood as ques­tions about val­ues or pri­ori­ties rather than about the ac­tual con­tent of the ac­tual world. I can call dolphins “fish” and go on to make just as ac­cu­rate pre­dic­tions about dolphins as you can. Every­thing we iden­tify as a joint is only a joint be­cause we care about it.

No. Every­thing we iden­tify as a joint is a joint not “be­cause we care about it”, but be­cause it helps us think about the things we care about.

Which di­men­sions of Thingspace you bother pay­ing at­ten­tion to might de­pend on your val­ues, and the clusters re­turned by your brain’s similar­ity-de­tec­tion al­gorithms might “split” or “col­lapse” ac­cord­ing to which sub­space you’re look­ing at. But in or­der for your map to be use­ful in the ser­vice of your val­ues, it needs to re­flect the statis­ti­cal struc­ture of things in the ter­ri­tory—which de­pends on the ter­ri­tory, not your val­ues.

There is an im­por­tant differ­ence be­tween “not in­clud­ing moun­tains on a map be­cause it’s a poli­ti­cal map that doesn’t show any moun­tains” and “not in­clud­ing Mt. Ever­est on a ge­o­graphic map, be­cause my sister died try­ing to climb Ever­est and see­ing it on the map would make me feel sad.”

There is an im­por­tant differ­ence be­tween “iden­ti­fy­ing this pill as not be­ing ‘poi­son’ al­lows me to fo­cus my un­cer­tainty about what I’ll ob­serve af­ter ad­minis­ter­ing the pill to a hu­man (even if most pos­si­ble minds have never seen a ‘hu­man’ and would never waste cy­cles imag­in­ing ad­minis­ter­ing the pill to one)” and “iden­ti­fy­ing this pill as not be­ing ‘poi­son’, be­cause if I pub­li­cly called it ‘poi­son’, then the man­u­fac­turer of the pill might sue me.”

There is an im­por­tant differ­ence be­tween hav­ing a util­ity func­tion defined over a statis­ti­cal model’s perfor­mance against spe­cific real-world data (even if an­other mind with differ­ent val­ues would be in­ter­ested in differ­ent data), and hav­ing a util­ity func­tion defined over fea­tures of the model it­self.

Re­mem­ber how ap­peal­ing to the dic­tio­nary is ir­ra­tional when the ac­tual mo­ti­va­tion for an ar­gu­ment is about whether to in­fer a prop­erty on the ba­sis of cat­e­gory-mem­ber­ship? But at least the dic­tio­nary has the virtue of doc­u­ment­ing typ­i­cal us­age of our shared com­mu­ni­ca­tion sig­nals: you can at least see how “You’re defect­ing from com­mon us­age” might feel like a sen­si­ble thing to say, even if one’s true re­jec­tion lies el­se­where. In con­trast, this mo­tion of ap­peal­ing to per­sonal val­ues (!?!) is so de­ranged that Yud­kowsky ap­par­ently didn’t even re­al­ize in 2008 that he might need to warn us against it!

You can’t change the cat­e­gories your mind ac­tu­ally uses and still perform as well on pre­dic­tion tasks—al­though you can change your ver­bally re­ported cat­e­gories, much as how one can ver­bally re­port “be­liev­ing” in an in­visi­ble, inaudible, flour-per­me­able dragon in one’s garage with­out hav­ing any false an­ti­ci­pa­tions-of-ex­pe­rience about the garage.

This may be eas­ier to see with a sim­ple nu­mer­i­cal ex­am­ple.

Sup­pose we have some en­tities that ex­ist in the three-di­men­sional vec­tor space ℝ³. There’s one cluster of en­tities cen­tered at [1, 2, 3], and we call those en­tities Foos, and there’s an­other cluster of en­tities cen­tered at [2, 4, 6], which we call Qu­uxes.

The one comes and says, “Well, I’m going to redefine the meaning of ‘Foo’ such that it also includes the things near [2, 4, 6] as well as the Foos-with-respect-to-the-old-definition, and you can’t say my new definition is wrong, because if I observe [2, _, _] (where the underscores represent yet-unobserved variables), I’m going to categorize that entity as a Foo but still predict that the unobserved variables are 4 and 6, so there.”

But if the one were actually using the new concept of Foo internally and not just saying the words “categorize it as a Foo”, they wouldn’t predict 4 and 6! They’d predict 3 and 4.5, because those are the average values of a generic Foo-with-respect-to-the-new-definition in the 2nd and 3rd coordinates (because (2+4)/2 = 6/2 = 3 and (3+6)/2 = 9/2 = 4.5). (The already-observed 2 in the first coordinate isn’t average, but by conditional independence, that only affects our prediction of the other two variables by means of its effect on our “prediction” of category-membership.) The cluster-structure knowledge that “entities for which x₁≈2, also tend to have x₂≈4 and x₃≈6” needs to be represented somewhere in the one’s mind in order to get the right answer. And given that that knowledge needs to be represented, it might also be useful to have a word for “the things near [2, 4, 6]” in order to efficiently share that knowledge with others.
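
Here is a minimal sketch of that prediction in plain Python; the variable names are invented for illustration, and the equal weighting of the two subclusters is the assumption implicit in the averaging above:

```python
# Cluster centers from the toy example above.
foo_old_center = [1, 2, 3]   # "Foo" under the old definition
quux_center = [2, 4, 6]      # the other cluster, near [2, 4, 6]

# A mind that represents only the coarse new category (both clusters lumped
# together as "Foo") can't use the observed x1 = 2 to favor either subcluster,
# so its best guess for the unobserved coordinates is the coarse category's mean
# (assuming the two subclusters are equally common):
predicted_x2 = (foo_old_center[1] + quux_center[1]) / 2   # (2 + 4) / 2 = 3.0
predicted_x3 = (foo_old_center[2] + quux_center[2]) / 2   # (3 + 6) / 2 = 4.5
print(predicted_x2, predicted_x3)   # 3.0 4.5

# A mind that still tracks the subcluster structure notices that x1 = 2 points
# at the [2, 4, 6] cluster, and predicts 4 and 6 instead.
```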

Of course, there isn’t going to be a unique way to encode the knowledge into natural language: there’s no reason the word/symbol “Foo” needs to represent “the stuff near [1, 2, 3]” rather than “both the stuff near [1, 2, 3] and also the stuff near [2, 4, 6]”. And you might very well indeed want a short word like “Foo” that encompasses both clusters, for example, if you want to contrast them to another cluster much farther away, or if you’re mostly interested in x₁ and the difference between x₁≈1 and x₁≈2 doesn’t seem large enough to notice.

But if speakers of a particular language were already using “Foo” to specifically talk about the stuff near [1, 2, 3], then you can’t swap in a new definition of “Foo” without changing the truth values of sentences involving the word “Foo.” Or rather: sentences involving Foo-with-respect-to-the-old-definition are different propositions from sentences involving Foo-with-respect-to-the-new-definition, even if they get written down using the same symbols in the same order.

Nat­u­rally, all this be­comes much more com­pli­cated as we move away from the sim­plest ideal­ized ex­am­ples.

For ex­am­ple, if the points are more evenly dis­tributed in con­figu­ra­tion space rather than be­long­ing to cleanly-dis­t­in­guish­able clusters, then es­sen­tial­ist “X is a Y” cog­ni­tive al­gorithms perform less well, and we get Sorites para­dox-like situ­a­tions, where we know roughly what we mean by a word, but are con­fronted with real-world (not merely hy­po­thet­i­cal) edge cases that we’re not sure how to clas­sify.

Or it might not be ob­vi­ous which di­men­sions of Thingspace are most rele­vant.

Or there might be so­cial or psy­cholog­i­cal forces an­chor­ing word us­ages on iden­ti­fi­able Schel­ling points that are easy for differ­ent peo­ple to agree upon, even at the cost of some statis­ti­cal “fit.”

We could go on list­ing more such com­pli­ca­tions, where we seem to be faced with some­what ar­bi­trary choices about how to de­scribe the world in lan­guage. But the fun­da­men­tal thing is this: the map is not the ter­ri­tory. Ar­bi­trari­ness in the map (what color should Texas be?) doesn’t cor­re­spond to ar­bi­trari­ness in the ter­ri­tory. Where the struc­ture of hu­man nat­u­ral lan­guage doesn’t fit the struc­ture in re­al­ity—where we’re not sure whether to say that a suffi­ciently small col­lec­tion of sand “is a heap”, be­cause we don’t know how to spec­ify the po­si­tions of the in­di­vi­d­ual grains of sand, or com­pute that the col­lec­tion has a Stan­dard Heap-ness Coeffi­cient of 0.64—that’s just a bug in our hu­man power of vibra­tory telepa­thy. You can ex­ploit the bug to con­fuse hu­mans, but that doesn’t change re­al­ity.

Sometimes we might wish that something belonged to a category that it doesn’t (with respect to the category boundaries that we would ordinarily use), so it’s tempting to avert our attention from this painful reality with appeal-to-arbitrariness language-lawyering, selectively applying our philosophy-of-language skills to pretend that we can define a word any way we want with no consequences. (“I’m not late!—well, okay, we agree that I arrived half an hour after the scheduled start time, but whether I was late depends on how you choose to draw the category boundaries of ‘late’, which is subjective.”)

For this rea­son it is said that know­ing about philos­o­phy of lan­guage can hurt peo­ple. Those who know that words don’t have in­trin­sic defi­ni­tions, but don’t know (or have seem­ingly for­got­ten) about the three or six dozen op­ti­mal­ity crite­ria gov­ern­ing the use of words, can eas­ily fash­ion them­selves a Fully Gen­eral Coun­ter­ar­gu­ment against any claim of the form “X is a Y”—

Y doesn’t un­am­bigu­ously re­fer to the thing you’re try­ing to point at. There’s no Pla­tonic essence of Y-ness: once we know any par­tic­u­lar fact about X we want to know, there’s no ques­tion left to ask. Clearly, you don’t un­der­stand how words work, there­fore I don’t need to con­sider whether there are any non-on­tolog­i­cally-con­fused rea­sons for some­one to say “X is a Y.”

Isolated demands for rigor are great for winning arguments against humans who aren’t as philosophically sophisticated as you, but the evolved systems of perception and language by which humans process and communicate information about reality, predate the Sequences. Every claim that X is a Y is an expression of cognitive work that cannot simply be dismissed just because most claimants don’t know how they work. Platonic essences are just the limiting case as the overlap between clusters in Thingspace goes to zero.

You should never say, “The choice of word is ar­bi­trary; there­fore I can say what­ever I want”—which amounts to, “The choice of cat­e­gory is ar­bi­trary, there­fore I can be­lieve what­ever I want.” If the choice were re­ally ar­bi­trary, you would be satis­fied with the choice be­ing made ar­bi­trar­ily: by flip­ping a coin, or call­ing a ran­dom num­ber gen­er­a­tor. (It doesn’t mat­ter which.) What­ever crite­rion your brain is us­ing to de­cide which word or be­lief you want, is your non-ar­bi­trary rea­son.

If what you want isn’t cur­rently true in re­al­ity, maybe there’s some ac­tion you could take to make it be­come true. To search for that ac­tion, you’re go­ing to need ac­cu­rate be­liefs about what re­al­ity is cur­rently like. To en­list the help of oth­ers in your plan­ning, you’re go­ing to need pre­cise ter­minol­ogy to com­mu­ni­cate ac­cu­rate be­liefs about what re­al­ity is cur­rently like. Even when—es­pe­cially when—the cur­rent re­al­ity is in­con­ve­nient.

Even when it hurts.

(Oh, and if you’re ac­tu­ally try­ing to op­ti­mize other peo­ple’s mod­els of the world, rather than the world it­self—you could just lie, rather than play­ing clever cat­e­gory-ger­ry­man­der­ing mind games. It would be a lot sim­pler!)


Imag­ine that you’ve had a pe­cu­liar job in a pe­cu­liar fac­tory for a long time. After many mind-numb­ing years of sort­ing bleggs and rubes all day and en­dur­ing be­ing trol­led by Su­san the Se­nior Sorter and her evil sense of hu­mor, you fi­nally work up the courage to ask Bob the Big Boss for a pro­mo­tion.

“Sure,” Bob says. “Start­ing to­mor­row, you’re our new Vice Pres­i­dent of Sort­ing!”

“Wow, this is amaz­ing,” you say. “I don’t know what to ask first! What will my new re­spon­si­bil­ities be?”

“Oh, your responsibilities will be the same: sort bleggs and rubes every Monday through Friday from 9 a.m. to 5 p.m.”

You frown. “Okay. But Vice Pres­i­dents get paid a lot, right? What will my salary be?”

“Still $9.50 hourly wages, just like now.”

You gri­mace. “O–kay. But Vice Pres­i­dents get more au­thor­ity, right? Will I be some­one’s boss?”

“No, you’ll still re­port to Su­san, just like now.”

You snort. “A Vice Pres­i­dent, re­port­ing to a mere Se­nior Sorter?”

“Oh, no,” says Bob. “Su­san is also get­ting pro­moted—to Se­nior Vice Pres­i­dent of Sort­ing!”

You lose it. “Bob, this is bul­lshit. When you said I was get­ting pro­moted to Vice Pres­i­dent, that cre­ated a bunch of prob­a­bil­is­tic ex­pec­ta­tions in my mind: you made me an­ti­ci­pate get­ting new challenges, more money, and more au­thor­ity, and then you re­veal that you’re just slap­ping an in­flated ti­tle on the same old dead-end job. It’s like hand­ing me a blegg, and then say­ing that it’s a rube that just hap­pens to be blue, furry, and egg-shaped … or tel­ling me you have a dragon in your garage, ex­cept that it’s an in­visi­ble, silent dragon that doesn’t breathe. You may think you’re be­ing kind to me ask­ing me to be­lieve in an un­falsifi­able pro­mo­tion, but when you re­place the sym­bol with the sub­stance, it’s ac­tu­ally just cruel. Stop fuck­ing with my head! … sir.”

Bob looks offended. “This promotion isn’t unfalsifiable,” he says. “It says, ‘Vice President of Sorting’ right here on the employee roster. That’s a sensory experience that you can make falsifiable predictions about. I’ll even get you business cards that say, ‘Vice President of Sorting.’ That’s another falsifiable prediction. Using language in a way you dislike is not lying. The propositions you claim false—about new job tasks, increased pay and authority—are not what the title is meant to convey, and this is known to everyone involved; it is not a secret.”


Bob kind of has a point. It’s tempt­ing to ar­gue that things like ti­tles and names are part of the map, not the ter­ri­tory. Un­less the name is writ­ten down. Or spo­ken aloud (in­stan­ti­ated in sound waves). Or thought about (in­stan­ti­ated in neu­rons). The map is part of the ter­ri­tory: in­sist­ing that the ti­tle isn’t part of the “job” and there­fore vi­o­lates the maxim that mean­ingful be­liefs must have testable con­se­quences, doesn’t quite work. Ob­serv­ing the ti­tle on the em­ployee ros­ter in­deed tightly con­strains your an­ti­ci­pated ex­pe­rience of the ti­tle on the busi­ness card. So, that’s a non-ger­ry­man­dered, pre­dic­tively use­ful cat­e­gory … right? What is there for a ra­tio­nal­ist to com­plain about?

To see the prob­lem, we must turn to in­for­ma­tion the­ory.

Let’s imag­ine that an ab­stract Job has four bi­nary prop­er­ties that can ei­ther be high or low—task com­plex­ity, pay, au­thor­ity, and pres­tige of ti­tle—form­ing a four-di­men­sional Job­space. Sup­pose that two-thirds of Jobs have {com­plex­ity: low, pay: low, au­thor­ity: low, ti­tle: low} (which we’ll write more briefly as [low, low, low, low]) and the re­main­ing one-third have {com­plex­ity: high, pay: high, au­thor­ity: high, ti­tle: high} (which we’ll write as [high, high, high, high]).

Task complexity and authority are hard to perceive outside of the company, and pay is only negotiated after an offer is made, so people deciding to seek a Job can only make decisions based on the Job’s title: but that’s fine, because in the scenario described, you can infer any of the other properties from the title with certainty. Because the properties are either all low or all high, the joint entropy of title and any other property is going to have the same value as either of the individual property entropies, namely ⅔ log₂ (3/2) + ⅓ log₂ 3 ≈ 0.918 bits.

But since H(pay) = H(ti­tle) = H(pay, ti­tle), then the mu­tual in­for­ma­tion I(pay; ti­tle) has the same value, be­cause I(pay; ti­tle) = H(pay) + H(ti­tle) − H(pay, ti­tle) by defi­ni­tion.
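
That 0.918-bit figure is easy to check in plain Python (the entropy helper is just for illustration):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Each property is low with probability 2/3 and high with probability 1/3, and
# all four properties move together, so the joint distribution of any pair also
# has just these two outcomes with the same probabilities.
print(entropy([2/3, 1/3]))   # ~0.918 bits
```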

Then sup­pose a lot of com­pa­nies get Bob’s bright idea: half of the Jobs that used to oc­cupy the point [low, low, low, low] in Job­space, get their ti­tle co­or­di­nate changed to high. So now one-third of the Jobs are at [low, low, low, low], an­other third are at [low, low, low, high], and the re­main­ing third are at [high, high, high, high]. What hap­pens to the mu­tual in­for­ma­tion I(pay; ti­tle)?

I(pay; ti­tle) = H(pay) + H(ti­tle) − H(pay, ti­tle)
= (⅔ log₂ (3/2) + ⅓ log₂ 3) + (⅔ log₂ (3/2) + ⅓ log₂ 3) − 3(⅓ log₂ 3)
= (4/3) log₂ (3/2) + ⅔ log₂ 3 − log₂ 3 ≈ 0.2516 bits.

It went down! Bob and his analogues, hav­ing ob­served that em­ploy­ees and Job-seek­ers pre­fer Jobs with high-pres­tige ti­tles, thought they were be­ing benev­olent by mak­ing more Jobs have the de­sired ti­tles. And per­haps they have helped savvy em­ploy­ees who can ar­bi­trage the gap be­tween the new and old wor­lds by be­ing able to put “Vice Pres­i­dent” on their re­sumés when search­ing for a new Job.

But from the per­spec­tive of peo­ple who wanted to use ti­tles as an eas­ily-com­mu­ni­ca­ble cor­re­late of the other fea­tures of a Job, all that’s ac­tu­ally been ac­com­plished is mak­ing lan­guage less use­ful.
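
Here is a minimal sketch verifying both mutual-information figures, in plain Python; the mutual_information helper and the dictionary encoding of the joint distributions are illustrative choices:

```python
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), for a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# Before title inflation: pay and title move together.
before = {("low", "low"): 2/3, ("high", "high"): 1/3}
# After: half of the formerly all-low Jobs get a high-prestige title.
after = {("low", "low"): 1/3, ("low", "high"): 1/3, ("high", "high"): 1/3}

print(mutual_information(before))  # ~0.918 bits
print(mutual_information(after))   # ~0.2516 bits
```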


In view of the pre­ced­ing dis­cus­sion, to “37 Ways That Words Can Be Wrong”, we might wish to ap­pend, “38. Your defi­ni­tion draws a bound­ary around a cluster in an in­ap­pro­pri­ately ‘thin’ sub­space of Thingspace that ex­cludes rele­vant vari­ables, re­sult­ing in fal­la­cies of com­pres­sion.”

Miyamoto Musashi is quoted:

The pri­mary thing when you take a sword in your hands is your in­ten­tion to cut the en­emy, what­ever the means. When­ever you parry, hit, spring, strike or touch the en­emy’s cut­ting sword, you must cut the en­emy in the same move­ment. It is es­sen­tial to at­tain this. If you think only of hit­ting, spring­ing, strik­ing or touch­ing the en­emy, you will not be able ac­tu­ally to cut him.

Similarly, the pri­mary thing when you take a word in your lips is your in­ten­tion to re­flect the ter­ri­tory, what­ever the means. When­ever you cat­e­go­rize, la­bel, name, define, or draw bound­aries, you must cut through to the cor­rect an­swer in the same move­ment. If you think only of cat­e­go­riz­ing, la­bel­ing, nam­ing, defin­ing, or draw­ing bound­aries, you will not be able ac­tu­ally to re­flect the ter­ri­tory.

Do not ask whether there’s a rule of ra­tio­nal­ity say­ing that you shouldn’t call dolphins fish. Ask whether dolphins are fish.

And if you speak over­much of the Way you will not at­tain it.

(Thanks to Ali­corn, Sarah Con­stantin, Ben Hoff­man, Zvi Mow­show­itz, Jes­sica Tay­lor, and Michael Vas­sar for feed­back.)