Words as Hidden Inferences

Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand. I reach in, and feel a small, curved object. I pull the object out, and it’s blue—a bluish egg. Next I reach in and feel something hard and flat, with edges—which, when I extract it, proves to be a red cube. I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red.

Now I reach in and I feel another egg-shaped object. Before I pull it out and look, I have to guess: What will it look like?

The evidence doesn’t prove that every egg in the barrel is blue, and every cube is red. The evidence doesn’t even argue this all that strongly: 19 is not a large sample size. Nonetheless, I’ll guess that this egg-shaped object is blue—or as a runner-up guess, red. If I guess anything else, there are as many possibilities as distinguishable colors—and for that matter, who says the egg has to be a single shade? Maybe it has a picture of a horse painted on.
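One way to make the guess above precise—an assumption on my part, since the essay commits to no particular formula—is Laplace’s rule of succession, which turns “11 out of 11 egg-shaped objects were blue” into a probability under a uniform prior on the blue-fraction:

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Probability the next trial succeeds, given a uniform prior
    over the underlying success rate (Laplace's rule of succession).
    This is one illustrative model, not the only reasonable one."""
    return (successes + 1) / (trials + 2)

# All 11 eggs drawn so far were blue, so the next egg-shaped object:
p_blue = rule_of_succession(11, 11)
print(f"P(next egg is blue) = {p_blue:.3f}")  # 12/13, about 0.923
```

Note how the result is confident but far from certain—exactly the “I guess, but I’m aware that I’m guessing” stance described below.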

So I say “blue”, with a dutiful patina of humility. For I am a sophisticated rationalist-type person, and I keep track of my assumptions and dependencies—I guess, but I’m aware that I’m guessing… right?

But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, “Yikes! A tiger!” Not, “Hm… objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties ‘hungry’ and ‘dangerous’, and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh CRUNCH CRUNCH GULP.”

The human brain, for some odd reason, seems to have been adapted to make this inference quickly, automatically, and without keeping explicit track of its assumptions.

And if I name the egg-shaped objects “bleggs” (for blue eggs) and the red cubes “rubes”, then, when I reach in and feel another egg-shaped object, I may think: Oh, it’s a blegg, rather than considering all that problem-of-induction stuff.

It is a common misconception that you can define a word any way you like.

This would be true if the brain treated words as purely logical constructs, Aristotelian classes, and you never took out any more information than you put in.

Yet the brain goes on about its work of categorization, whether or not we consciously approve. “All humans are mortal, Socrates is a human, therefore Socrates is mortal”—thus spake the ancient Greek philosophers. Well, if mortality is part of your logical definition of “human”, you can’t logically classify Socrates as human until you observe him to be mortal. But—this is the problem—Aristotle knew perfectly well that Socrates was a human. Aristotle’s brain placed Socrates in the “human” category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment: Swiftly, silently, and without conscious approval.

Aristotle laid down rules under which no one could conclude Socrates was “human” until after he died. Nonetheless, Aristotle and his students went on concluding that living people were humans and therefore mortal; they saw distinguishing properties such as human faces and human bodies, and their brains made the leap to inferred properties such as mortality.

Misunderstanding the working of your own mind does not, thankfully, prevent the mind from doing its work. Otherwise Aristotelians would have starved, unable to conclude that an object was edible merely because it looked and felt like a banana.

So the Aristotelians went on classifying environmental objects on the basis of partial information, the way people had always done. Students of Aristotelian logic went on thinking exactly the same way, but they had acquired an erroneous picture of what they were doing.

If you asked an Aristotelian philosopher whether Carol the grocer was mortal, they would say “Yes.” If you asked them how they knew, they would say “All humans are mortal, Carol is human, therefore Carol is mortal.” Ask them whether it was a guess or a certainty, and they would say it was a certainty (if you asked before the sixteenth century, at least). Ask them how they knew that humans were mortal, and they would say it was established by definition.

The Aristotelians were still the same people; they retained their original natures, but they had acquired incorrect beliefs about their own functioning. They looked into the mirror of self-awareness, and saw something unlike their true selves: they reflected incorrectly.

Your brain doesn’t treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity. Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories. Notice how I said “you” and “your brain” as if they were different things?

Making errors about the inside of your head doesn’t change what’s there; otherwise Aristotle would have died when he concluded that the brain was an organ for cooling the blood. Philosophical mistakes usually don’t interfere with blink-of-an-eye perceptual inferences.

But philosophical mistakes can severely mess up the deliberate thinking processes that we use to try to correct our first impressions. If you believe that you can “define a word any way you like”, without realizing that your brain goes on categorizing without your conscious oversight, then you won’t make the effort to choose your definitions wisely.