37 Ways That Words Can Be Wrong

Some reader is bound to declare that a better title for this post would be “37 Ways That You Can Use Words Unwisely”, or “37 Ways That Suboptimal Use Of Categories Can Have Negative Side Effects On Your Cognition”.

But one of the primary lessons of this gigantic list is that saying “There’s no way my choice of X can be ‘wrong’” is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it’s theoretically impossible to be wrong, you can still be wrong. There is never a Get-Out-Of-Jail-Free card for anything you do. That’s life.

Besides, I can define the word “wrong” to mean anything I like—it’s not like a word can be wrong.

Personally, I think it quite justified to use the word “wrong” when:

  1. A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no? (The Parable of the Dagger.)

  2. Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever? (The Parable of Hemlock.)

  3. You try to establish any sort of empirical proposition as being true “by definition”. Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn’t keel over—where he’s immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in—and anything you can establish “by definition” is a logical truth. (The Parable of Hemlock.)

  4. You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is “human”, even though, on your definition, you can never call Bob “human” without first observing him to be mortal. (The Parable of Hemlock.)

  5. The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs “bleggs” and the red cubes “rubes”, you may reach into the barrel, feel an egg shape, and think “Oh, a blegg.” (Words as Hidden Inferences.)

  6. You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example. “What is red?” “Red is a color.” “What’s a color?” “It’s a property of a thing?” “What’s a thing? What’s a property?” It never occurs to you to point to a stop sign and an apple. (Extensions and Intensions.)

  7. The extension doesn’t match the intension. We aren’t consciously aware of our identification of a red light in the sky as “Mars”, which will probably happen regardless of your attempt to define “Mars” as “The God of War”. (Extensions and Intensions.)

  8. Your verbal definition doesn’t capture more than a tiny fraction of the category’s shared characteristics, but you try to reason as if it does. When the philosophers of Plato’s Academy claimed that the best definition of a human was a “featherless biped”, Diogenes the Cynic is said to have exhibited a plucked chicken and declared “Here is Plato’s Man.” The Platonists promptly changed their definition to “a featherless biped with broad nails”. (Similarity Clusters.)

  9. You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters. Ducks and penguins are less typical birds than robins and pigeons. Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island than from ducks to robins. (Typicality and Asymmetrical Similarity.)

  10. A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you’ll get enough information that the occasional nine-fingered human won’t fool you. (The Cluster Structure of Thingspace.)

  11. You ask whether something “is” or “is not” a category member but can’t name the question you really want answered. What is a “man”? Is Barney the Baby Boy a “man”? The “correct” answer may depend considerably on whether the query you really want answered is “Would hemlock be a good thing to feed Barney?” or “Will Barney make a good husband?” (Disguised Queries.)

  12. You treat intuitively perceived hierarchical categories like the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn’t use them. It’s much easier for a human to notice whether an object is a “blegg” or “rube” than for a human to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs. Other statistical algorithms work differently. (Neural Categories.)

  13. You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said “Socrates is a man”, not, “My brain perceptually classifies Socrates as a match against the ‘human’ concept”. (How An Algorithm Feels From Inside.)

  14. You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what’s left to ask by arguing, “Is it a blegg?” But if your brain’s categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there’s a leftover question. (How An Algorithm Feels From Inside.)

  15. You allow an argument to slide into being about definitions, even though it isn’t what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a “sound”, you asked the two soon-to-be arguers whether they thought a “sound” should be defined as “acoustic vibrations” or “auditory experiences”, they’d probably tell you to flip a coin. Only after the argument starts does the definition of a word become politically charged. (Disputing Definitions.)

  16. You think a word has a meaning, as a property of the word itself, rather than there being a label that your brain associates to a particular concept. When someone shouts, “Yikes! A tiger!”, evolution would not favor an organism that thinks, “Hm… I have just heard the syllables ‘Tie’ and ‘Grr’ which my fellow tribe members associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP.” So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself. People argue about the correct meaning of a label like “sound”. (Feel the Meaning.)

  17. You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we’re hard to stop; if we have no common language, we’ll draw pictures in sand. When you each understand what is in the other’s mind, you are done. (The Argument From Common Usage.)

  18. You pull out a dictionary in the middle of an empirical or moral argument. Dictionary editors are historians of usage, not legislators of language. If the common definition contains a problem—if “Mars” is defined as the God of War, or a “dolphin” is defined as a kind of fish, or “Negroes” are defined as a separate category from humans—the dictionary will reflect the standard mistake. (The Argument From Common Usage.)

  19. You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether “atheism” is a “religion” or whatever? If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument? (The Argument From Common Usage.)

  20. You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle. (The Argument From Common Usage.)

  21. You use complex renamings to create the illusion of inference. Is a “human” defined as a “mortal featherless biped”? Then write: “All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal.” Looks less impressive that way, doesn’t it? (Empty Labels.)

  22. You get into arguments that you could avoid if you just didn’t use the word. If Albert and Barry aren’t allowed to use the word “sound”, then Albert will have to say “A tree falling in a deserted forest generates acoustic vibrations”, and Barry will say “A tree falling in a deserted forest generates no auditory experiences”. When a word poses a problem, the simplest solution is to eliminate the word and its synonyms. (Taboo Your Words.)

  23. The existence of a neat little word prevents you from seeing the details of the thing you’re trying to think about. What actually goes on in schools once you stop calling it “education”? What’s a degree, once you stop calling it a “degree”? If a coin lands “heads”, what’s its radial orientation? What is “truth”, if you can’t say “accurate” or “correct” or “represent” or “reflect” or “semantic” or “believe” or “knowledge” or “map” or “real” or any other simple term? (Replace the Symbol with the Substance.)

  24. You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket. It’s part of a detective’s ordinary work to observe that Carol wore red last night, or that she has black hair; and it’s part of a detective’s ordinary work to wonder if maybe Carol dyes her hair. But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair. (Fallacies of Compression.)

  25. You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension. In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled. (Categorizing Has Consequences.)

  26. You try to sneak in the connotations of a word, by arguing from a definition that doesn’t include the connotations. A “wiggin” is defined in the dictionary as a person with green eyes and black hair. The word “wiggin” also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn’t in the dictionary. So you point to someone and say: “Green eyes? Black hair? See, told you he’s a wiggin! Watch, next he’s going to steal the silverware.” (Sneaking in Connotations.)

  27. You claim “X, by definition, is a Y!” On such occasions you’re almost certainly trying to sneak in a connotation of Y that wasn’t in your given definition. You define “human” as a “featherless biped”, and point to Socrates and say, “No feathers—two legs—he must be human!” But what you really care about is something else, like mortality. If what was in dispute was Socrates’s number of legs, the other fellow would just reply, “Whaddaya mean, Socrates’s got two legs? That’s what we’re arguing about in the first place!” (Arguing “By Definition”.)

  28. You claim “Ps, by definition, are Qs!” If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there’s no point in arguing “Men, by definition, are mortal!” The main time you feel the need to tighten the vise by insisting that something is true “by definition” is when there’s other information that calls the default inference into doubt. (Arguing “By Definition”.)

  29. You try to establish membership in an empirical cluster “by definition”. You wouldn’t feel the need to say, “Hinduism, by definition, is a religion!” because, well, of course Hinduism is a religion. It’s not just a religion “by definition”, it’s, like, an actual religion. Atheism does not resemble the central members of the “religion” cluster, so if it wasn’t for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn’t a religion. That’s why you’ve got to crush all opposition by pointing out that “Atheism is a religion” is true by definition, because it isn’t true any other way. (Arguing “By Definition”.)

  30. Your definition draws a boundary around things that don’t really belong together. You can claim, if you like, that you are defining the word “fish” to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae. You can claim, if you like, that this is merely a list, and there is no way a list can be “wrong”. Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don’t belong on the fish list. (Where to Draw the Boundary?)

  31. You use a short word for something that you won’t need to describe often, or a long word for something you’ll need to describe often. This can result in inefficient thinking, or even misapplications of Occam’s Razor, if your mind thinks that short sentences sound “simpler”. Which sounds more plausible, “God did a miracle” or “A supernatural universe-creating entity temporarily suspended the laws of physics”? (Entropy, and Short Codes.)

  32. You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences. Since green-eyed people are not more likely to have black hair, or vice versa, and they don’t share any other characteristics in common, why have a word for “wiggin”? (Mutual Information, and Density in Thingspace.)

  33. You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious. If you don’t present reasons to draw that particular boundary, trying to create an “arbitrary” word in that location is like a detective saying: “Well, I haven’t the slightest shred of support one way or the other for who could’ve murdered those orphans… but have we considered John Q. Wiffleheim as a suspect?” (Superexponential Conceptspace, and Simple Words.)

  34. You use categorization to make inferences about properties that don’t have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes. No way am I trying to summarize this one. Just read the blog post. (Conditional Independence, and Naive Bayes.)

  35. You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace. Visualize a “triangular lightbulb”. What did you see? (Words as Mental Paintbrush Handles.)

  36. You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting. “Martin told Bob the building was on his left.” But “left” is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context. Whose “left” is meant, Bob’s or Martin’s? (Variable Question Fallacies.)

  37. You think that definitions can’t be “wrong”, or that “I can define a word any way I like!” This kind of attitude teaches you to indignantly defend your past actions, instead of paying attention to their consequences, or fessing up to your mistakes. (37 Ways That Suboptimal Use Of Categories Can Have Negative Side Effects On Your Cognition.)
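Items 31 through 34 lean on an information-theoretic picture: a category earns its word when its features carry information about each other. As a rough illustration (not from the original essay; the probabilities below are invented), here is a minimal Python sketch comparing the “wiggin” boundary, whose features are independent, with a “blegg”-like cluster, whose features co-occur:

```python
# A numerical sketch of the "density in thingspace" point:
# mutual information measures how much one feature tells you
# about another. All probabilities here are made up.
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(
        p * math.log2(p / (px[x] * py[y]))
        for (x, y), p in joint.items() if p > 0
    )

# "Wiggin": green eyes (10%) and black hair (20%) occur
# independently, so knowing one tells you nothing about the other.
wiggin = {
    (1, 1): 0.02, (1, 0): 0.08,
    (0, 1): 0.18, (0, 0): 0.72,
}

# "Blegg": blueness and egg-shape almost always co-occur,
# so the boundary supports real Bayesian inference.
blegg = {
    (1, 1): 0.48, (1, 0): 0.02,
    (0, 1): 0.02, (0, 0): 0.48,
}

print(mutual_information(wiggin))  # ~0 bits: no inference to perform
print(mutual_information(blegg))   # ~0.76 bits
```

The design point is the one the essay makes in prose: a word drawn around a region of no greater-than-usual density buys you zero bits, while a word drawn around a genuine cluster lets you infer unseen features from seen ones.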

Everything you do in the mind has an effect, and your brain races ahead unconsciously without your supervision.

Saying “Words are arbitrary; I can define a word any way I like” makes around as much sense as driving a car over thin ice with the accelerator floored and saying, “Looking at this steering wheel, I can’t see why one radial angle is special—so I can turn the steering wheel any way I like.”

If you’re trying to go anywhere, or even just trying to survive, you had better start paying attention to the three or six dozen optimality criteria that control how you use words, definitions, categories, classes, boundaries, labels, and concepts.