Concepts Don’t Work That Way

Part of the sequence: Rationality and Philosophy

Philosophy in the Flesh, by George Lakoff and Mark Johnson, opens with a bang:

The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.

These are three major findings of cognitive science. More than two millennia of a priori philosophical speculation about these aspects of reason are over. Because of these discoveries, philosophy can never be the same again.

When taken together and considered in detail, these three findings… are inconsistent with central parts of… analytic philosophy...

This book asks: What would happen if we started with these empirical discoveries about the nature of mind and constructed philosophy anew?

...A serious appreciation of cognitive science requires us to rethink philosophy from the beginning, in a way that would put it more in touch with the reality of how we think.

So what would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning?

What emerges is a philosophy close to the bone. A philosophical perspective based on our empirical understanding of the embodiment of mind is a philosophy in the flesh, a philosophy that takes account of what we most basically are and can be.

Philosophy is a diseased discipline, but good philosophy can (and must) be done. I’d like to explore how one can do good philosophy, in part by taking cognitive science seriously.

Conceptual Analysis

Let me begin with a quick, easy example of how cognitive science can inform our philosophical methodology. The example below shouldn’t surprise anyone who has read A Human’s Guide to Words, but it does illustrate how misguided thousands of philosophical works can be due to ignorance of cognitive science.

Consider what may be the central method of 20th-century analytic philosophy: conceptual analysis. In its standard form, conceptual analysis assumes (Ramsey 1992) the “classical view” of concepts, that a “concept C has definitional structure in that it is composed of simpler concepts that express necessary and sufficient conditions for falling under C.” For example, the concept bachelor has the constituents unmarried and man. Something falls under the concept bachelor if and only if it is an unmarried man.
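The classical view is simple enough to state in code. Here is a minimal sketch (my own illustration, not drawn from any philosophical source): membership is a strict boolean test against conditions that are individually necessary and jointly sufficient.

```python
# A minimal sketch (illustrative only) of the classical view of concepts:
# category membership is a strict yes/no test against conditions that are
# individually necessary and jointly sufficient.

def is_bachelor(person: dict) -> bool:
    """Classical view: x falls under 'bachelor' iff x is an unmarried man."""
    return (not person["married"]) and person["sex"] == "male"

print(is_bachelor({"married": False, "sex": "male"}))  # True: meets both conditions
print(is_bachelor({"married": True, "sex": "male"}))   # False: fails 'unmarried'
```

On this view there are no degrees of bachelorhood: every candidate is either fully in the category or fully out, with nothing in between.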

Conceptual analysis, then, is the attempt to examine our intuitive concepts and arrive at definitions (in terms of necessary and sufficient conditions) that capture the meaning of those concepts. De Paul & Ramsey (1999) explain:

Anyone familiar with Plato’s dialogues knows how [conceptual analysis] is conducted. We see Socrates encounter someone who claims to have figured out the true essence of some abstract notion… the person puts forward a definition or analysis of the notion in the form of necessary and sufficient conditions that are thought to capture all and only instances of the concept in question. Socrates then refutes his interlocutor’s definition of the concept by pointing out various counterexamples...

For example, in Book I of the Republic, when Cephalus defines justice in a way that requires the returning of property and total honesty, Socrates responds by pointing out that it would be unjust to return weapons to a person who had gone mad or to tell the whole truth to such a person.… [The] proposed analysis is rejected because it fails to capture our intuitive judgments about the nature of justice.

After a proposed analysis or definition is overturned by an intuitive counterexample, the idea is to revise or replace the analysis with one that is not subject to the counterexample. Counterexamples to the new analysis are sought, the analysis revised if any counterexamples are found, and so on...

The practice continues even today. Consider the conceptual analysis of knowledge. For centuries, knowledge was considered by most to be justified true belief (JTB). If Susan believed X but X wasn’t true, then Susan couldn’t be said to have knowledge of X. Likewise, if X was true but Susan didn’t believe X, then she didn’t have knowledge of X. And if Susan believed X and X was true but Susan had no justification for believing X, then she didn’t really have “knowledge,” she just had an accidentally true belief. But if Susan had justified true belief of X, then she did have knowledge of X.

And then Gettier (1963) offered some famous counterexamples to this analysis of knowledge. Here is a later counterexample, summarized by Zagzebski (1994):

...imagine that you are driving through a region in which, unknown to you, the inhabitants have erected three barn facades for each real barn in an effort to make themselves look more prosperous. Your eyesight is normal and reliable enough in ordinary circumstances to spot a barn from the road. But in this case the fake barns are indistinguishable from the real barns at such a distance. As you look at a real barn you form the belief ‘That’s a fine barn’. The belief is true and justified, but [intuitively, it isn’t knowledge].

As with most counterexamples to the JTB analysis of knowledge, this one arises due to “accidents” in the scenario:

It is only an accident that visual faculties normally reliable in this sort of situation are not reliable in this particular situation; and it is another accident that you happened to be looking at a real barn and hit on the truth anyway… the [counterexample] arises because an accident of bad luck is cancelled out by an accident of good luck.

A cottage industry sprang up around these “Gettier problems,” with philosophers proposing new sets of necessary and sufficient conditions for knowledge, and other philosophers raising counterexamples to them. Weatherson (2003) described this circus as “the analysis of knowledge merry-go-round.”

My purpose here is not to examine Gettier problems in particular, but merely to show that the construction of conceptual analyses in terms of necessary and sufficient conditions is mainstream philosophical practice, and has been for a long time.

Now, let me explain how cognitive science undermines this mainstream philosophical practice.

Concepts in the Brain

The problem is that the brain doesn’t store concepts in terms of necessary and sufficient conditions, so philosophers have been using their intuitions to search for something that isn’t there. No wonder philosophers have, for over a century, failed to produce a single successful, non-trivial conceptual analysis (Fodor 1981; Mills 2008).

How do psychologists know the brain doesn’t work this way? Murphy (2002, p. 16) writes:

The groundbreaking work of Eleanor Rosch in the 1970s essentially killed the classical view, so that it is not now the theory of any actual [scientific] researcher...

But before we get to Rosch, let’s look at a different experiment:

McCloskey and Glucksberg (1978)… found that when people were asked to make repeated category judgments such as ‘‘Is an olive a fruit?’’ or ‘‘Is a dog an animal?’’ there was a subset of items that individual subjects changed their minds about. That is, if you said that an olive was a fruit on one day, two weeks later you might give the opposite answer. Naturally, subjects did not do this for cases like ‘‘Is a dog an animal?’’ or ‘‘Is a rose an animal?’’ But they did change their minds on borderline cases, such as olive-fruit, and curtains-furniture. In fact, for items that were intermediate between clear members and clear nonmembers, McCloskey and Glucksberg’s subjects changed their mind 22% of the time. This may be compared to inconsistent decisions of under 3% for the best examples and clear nonmembers… Thus, the changes in subjects’ decisions do not reflect an overall inconsistency or lack of attention, but a bona fide uncertainty about the borderline members. In short, many concepts are not clear-cut. There are some items that… seem to be “kind of” members. (Murphy 2002, p. 20)

Category membership for concepts in the human brain is not a yes/no affair, as the “necessary and sufficient conditions” approach of the classical view assumes. Instead, category membership is fuzzy.
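One common way to model such fuzziness is graded similarity to a prototype rather than a boolean definition. The toy sketch below is my own illustration, with invented feature sets (the features and scores are hypothetical, chosen only to show how borderline items like olive-fruit fall between clear members and clear nonmembers):

```python
# A toy illustration of graded category membership: instead of a yes/no
# definition, score each item by how many of the prototype's features it
# shares. Feature sets here are made up for illustration.

FRUIT_PROTOTYPE = {"sweet", "grows_on_plant", "eaten_raw", "has_seeds"}

ITEMS = {
    "apple": {"sweet", "grows_on_plant", "eaten_raw", "has_seeds"},
    "olive": {"grows_on_plant", "has_seeds"},  # borderline case
    "dog":   set(),                            # clear nonmember
}

def membership(features: set) -> float:
    """Fraction of the prototype's features the item shares (0.0 to 1.0)."""
    return len(features & FRUIT_PROTOTYPE) / len(FRUIT_PROTOTYPE)

for name, feats in ITEMS.items():
    print(f"{name}: {membership(feats):.2f}")
# apple scores 1.0, olive 0.5, dog 0.0: olive is "kind of" a fruit,
# which a strict necessary-and-sufficient-conditions test cannot express.
```

A model like this predicts exactly the McCloskey and Glucksberg result: subjects should be stable on items near 1.0 or 0.0 and unstable on items near the middle.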

Another problem for the classical view is raised by typicality effects:

Think of a fish, any fish. Did you think of something like a trout or a shark, or did you think of an eel or a flounder? Most people would admit to thinking of something like the first: a torpedo-shaped object with small fins, bilaterally symmetrical, which swims in the water by moving its tail from side to side. Eels are much longer, and they slither; flounders are also differently shaped, aren’t symmetrical, and move by waving their body in the vertical dimension. Although all of these things are technically fish, they do not all seem to be equally good examples of fish. The typical category members are the good examples — what you normally think of when you think of the category. The atypical objects are ones that are known to be members but that are unusual in some way… The classical view does not have any way of distinguishing typical and atypical category members. Since all the items in the category have met the definition’s criteria, all are category members.

...The simplest way to demonstrate this phenomenon is simply to ask people to rate items on how typical they think each item is of a category. So, you could give people a list of fish and ask them to rate how typical each one is of the category fish. Rosch (1975) did this task for 10 categories and looked to see how much subjects agreed with one another. She discovered that the reliability of typicality ratings was an extremely high .97 (where 1.0 would be perfect agreement)… In short, people agree that a trout is a typical fish and an eel is an atypical one. (Murphy 2002, p. 22)

So people agree that some items are more typical category members than others, but do these typicality effects manifest in normal cognition and behavior?

Yes, they do.

Rips, Shoben, and Smith (1973) found that the ease with which people judged category membership depended on typicality. For example, people find it very easy to affirm that a robin is a bird but are much slower to affirm that a chicken (a less typical item) is a bird. This finding has also been found with visual stimuli: Identifying a picture of a chicken as a bird takes longer than identifying a pictured robin (Murphy and Brownell 1985; Smith, Balzano, and Walker 1978). The influence of typicality is not just in identifying items as category members — it also occurs with the production of items from a category. Battig and Montague (1969) performed a very large norming study in which subjects were given category names, like furniture or precious stone, and had to produce examples of these categories. These data are still used today in choosing stimuli for experiments (though they are limited, as a number of common categories were not included). Mervis, Catlin, and Rosch (1976) showed that the items that were most often produced in response to the category names were the ones rated as typical (by other subjects). In fact, the average correlation of typicality and production frequency across categories was .63, which is quite high given all the other variables that affect production.

When people learn artificial categories, they tend to learn the typical items before the atypical ones (Rosch, Simpson, and Miller 1976). Furthermore, learning is faster if subjects are taught on mostly typical items than if they are taught on atypical items (Mervis and Pani 1980; Posner and Keele 1968). Thus, typicality is not just a feeling that people have about some items (“trout good; eels bad”) — it is important to the initial learning of the category in a number of respects...

Learning is not the end of the influence, however. Typical items are more useful for inferences about category members. For example, imagine that you heard that eagles had caught some disease. How likely do you think it would be to spread to other birds? Now suppose that it turned out to be larks or robins who caught the disease. Rips (1975) found that people were more likely to infer that other birds would catch the disease when a typical bird, like robins, had it than when an atypical one, like eagles, had it… (Murphy 2002, p. 23)

(If you want further evidence of typicality effects on cognition, see Murphy [2002] and Hampton [2008].)
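Findings like the .63 typicality-production correlation come from analyses of the following shape: collect a typicality rating and a production frequency for each item, then compute a Pearson correlation across items. The sketch below uses made-up numbers (the ratings and counts are hypothetical, not Mervis et al.'s actual data):

```python
# A small sketch of the kind of analysis behind typicality-production
# correlations: Pearson correlation between typicality ratings and how
# often subjects produce each item as a category example. All numbers
# below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: typical birds are both rated higher and listed more
# often when subjects are asked to produce examples of "bird".
typicality = [6.9, 6.5, 4.0, 2.1]  # robin, sparrow, chicken, penguin
production = [45, 38, 9, 3]        # number of subjects listing each item

r = pearson(typicality, production)
print(f"r = {r:.2f}")  # strongly positive for this invented data
```

With real norming data the correlation is attenuated by other variables that affect production (such as word frequency), which is why an observed value of .63 counts as quite high.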

The classical view of concepts, with its binary category membership, cannot explain typicality effects.

So the classical view of concepts must be rejected, along with any version of conceptual analysis that depends upon it. (If you doubt that many philosophers have done work dependent on the classical view of concepts, see here.)

To be fair, quite a few philosophers have now given up on the classical view of concepts and the “necessary and sufficient conditions” approach to conceptual analysis. And of course there are other ways in which seeking definitions stipulated as necessary and sufficient conditions can be useful. But I wanted to begin with a clear and “settled” case of how cognitive science can undermine a particular philosophical practice and require that we ask and answer philosophical questions differently.

Philosophy by humans must respect the cognitive science of how humans reason.

Next post: Living Metaphorically

Previous post: When Intuitions Are Useful

References

Battig & Montague (1969). Category norms for verbal items in 56 categories: A replication and extension of the Connecticut category norms. Journal of Experimental Psychology Monograph, 80 (3, part 2).

De Paul & Ramsey (1999). Preface. In De Paul & Ramsey (eds.), Rethinking Intuition. Rowman & Littlefield.

Fodor (1981). The present status of the innateness controversy. In Fodor, Representations: Philosophical Essays on the Foundations of Cognitive Science. MIT Press.

Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121–123.

Hampton (2008). Concepts in human adults. In Mareschal, Quinn, & Lea (eds.), The Making of Human Concepts (pp. 295–313). Oxford University Press.

McCloskey & Glucksberg (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6: 462–472.

Mervis, Catlin, & Rosch (1976). Categorization of natural objects. Annual Review of Psychology, 32: 89–115.

Mervis & Pani (1980). Acquisition of basic object categories. Cognitive Psychology, 12: 496–522.

Mills (2008). Are analytic philosophers shallow and stupid? The Journal of Philosophy, 105: 301–319.

Murphy (2002). The Big Book of Concepts. MIT Press.

Murphy & Brownell (1985). Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11: 70–84.

Posner & Keele (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77: 353–363.

Ramsey (1992). Prototypes and conceptual analysis. Topoi, 11: 59–70.

Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665–681.

Rips, Shoben, & Smith (1973). Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior, 12: 1–20.

Rosch (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104: 192–233.

Rosch, Simpson, & Miller (1976). Structural bases of typicality effects. Journal of Experimental Psychology: Human Perception and Performance, 2: 491–502.

Smith, Balzano, & Walker (1978). Nominal, perceptual, and semantic codes in picture categorization. In Cotton & Klatzky (eds.), Semantic Factors in Cognition (pp. 137–168). Erlbaum.

Weatherson (2003). What good are counterexamples? Philosophical Studies, 115: 1–31.

Zagzebski (1994). The inescapability of Gettier problems. The Philosophical Quarterly, 44: 65–73.