Truly Part Of You

A classic paper by Drew McDermott, “Artificial Intelligence Meets Natural Stupidity,” criticized AI programs that would try to represent notions like happiness is a state of mind using a semantic network:

And of course there’s nothing inside the HAPPINESS node; it’s just a naked Lisp token with a suggestive English name.

So, McDermott says, “A good test for the disciplined programmer is to try using gensyms in key places and see if he still admires his system. For example, if STATE-OF-MIND is renamed G1073 . . .” then we would have IS-A(HAPPINESS, G1073), “which looks much more dubious.”

Or as I would slightly rephrase the idea: If you substituted randomized symbols for all the suggestive English names, you would be completely unable to figure out what G1071(G1072, G1073) meant. Was the AI program meant to represent hamburgers? Apples? Happiness? Who knows? If you delete the suggestive English names, they don’t grow back.
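McDermott’s test is easy to try mechanically. Here is a minimal sketch in Python, with a made-up triple representation for the semantic network, that replaces every suggestive name with a gensym:

```python
import itertools

# A toy semantic network: relations between suggestively named tokens.
network = [("IS-A", "HAPPINESS", "STATE-OF-MIND"),
           ("IS-A", "STATE-OF-MIND", "MENTAL-ENTITY")]

# McDermott's test: replace every suggestive name with a gensym
# and see whether the "knowledge" still looks like knowledge.
counter = itertools.count(1071)
gensyms = {}

def gensym(name):
    """Map each distinct name to a fresh, meaningless symbol."""
    if name not in gensyms:
        gensyms[name] = "G%d" % next(counter)
    return gensyms[name]

scrambled = [tuple(gensym(tok) for tok in triple) for triple in network]
print(scrambled)
# → [('G1071', 'G1072', 'G1073'), ('G1071', 'G1073', 'G1074')]
```

The scrambled network carries exactly as much information as the original, which is to say: nothing beyond the bare graph structure.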

Suppose a physicist tells you that “Light is waves,” and you believe the physicist. You now have a little network in your head that says:

IS-A(LIGHT, WAVES)

As McDermott says, “The whole problem is getting the hearer to notice what it has been told. Not ‘understand,’ but ‘notice.’ ” Suppose that instead the physicist told you, “Light is made of little curvy things.”1 Would you notice any difference of anticipated experience?

How can you realize that you shouldn’t trust your seeming knowledge that “light is waves”? One test you could apply is asking, “Could I regenerate this knowledge if it were somehow deleted from my mind?”

This is similar in spirit to scrambling the names of suggestively named Lisp tokens in your AI program, and seeing if someone else can figure out what they allegedly “refer” to. It’s also similar in spirit to observing that an Artificial Arithmetician programmed to record and play back

Plus-Of(Seven, Six) = Thirteen

can’t regenerate the knowledge if you delete it from memory, until another human re-enters it in the database. Just as if you forgot that “light is waves,” you couldn’t get back the knowledge except the same way you got the knowledge to begin with—by asking a physicist. You couldn’t generate the knowledge for yourself, the way that physicists originally generated it.
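The contrast can be made concrete. In this hypothetical sketch, a playback arithmetician stores facts it was told, while a generative one contains the procedure itself and can recompute a deleted fact (the names `playback` and `generative_plus` are illustrative, not from any real system):

```python
# Two "arithmeticians." The first only plays back facts it was told;
# the second contains the generator and can recompute a deleted fact.

playback = {("Seven", "Six"): "Thirteen"}  # entered by a human

NUMERALS = ["Zero", "One", "Two", "Three", "Four", "Five", "Six",
            "Seven", "Eight", "Nine", "Ten", "Eleven", "Twelve", "Thirteen"]

def generative_plus(a, b):
    """Contains the source: addition itself, not a recording of its outputs."""
    return NUMERALS[NUMERALS.index(a) + NUMERALS.index(b)]

# Delete the stored fact:
del playback[("Seven", "Six")]

# The playback table now has no way to recover the fact;
# the generative version simply recomputes it.
print(generative_plus("Seven", "Six"))  # → Thirteen
```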

The same experiences that lead us to formulate a belief, connect that belief to other knowledge and sensory input and motor output. If you see a beaver chewing a log, then you know what this thing-that-chews-through-logs looks like, and you will be able to recognize it on future occasions whether it is called a “beaver” or not. But if you acquire your beliefs about beavers by someone else telling you facts about “beavers,” you may not be able to recognize a beaver when you see one.

This is the terrible danger of trying to tell an artificial intelligence facts that it could not learn for itself. It is also the terrible danger of trying to tell someone about physics that they cannot verify for themselves. For what physicists mean by “wave” is not “little squiggly thing” but a purely mathematical concept.

As Donald Davidson observes, if you believe that “beavers” live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about “beavers” is not right enough to be wrong.2 If you don’t have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all? Wittgenstein: “A wheel that can be turned though nothing else moves with it, is not part of the mechanism.”

Almost as soon as I started reading about AI—even before I read McDermott—I realized it would be a really good idea to always ask myself: “How would I regenerate this knowledge if it were deleted from my mind?”

The deeper the deletion, the stricter the test. If all proofs of the Pythagorean Theorem were deleted from my mind, could I re-prove it? I think so. If all knowledge of the Pythagorean Theorem were deleted from my mind, would I notice the Pythagorean Theorem to re-prove? That’s harder to boast, without putting it to the test; but if you handed me a right triangle with sides of length 3 and 4, and told me that the length of the hypotenuse was calculable, I think I would be able to calculate it, if I still knew all the rest of my math.
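For the 3-4-5 triangle above, assuming the theorem itself survives the deletion, the regeneration is a single line:

```python
import math

# Regenerate the hypotenuse from the theorem (c² = a² + b²),
# rather than recalling a memorized "5":
a, b = 3.0, 4.0
c = math.sqrt(a**2 + b**2)
print(c)  # → 5.0
```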

What about the notion of mathematical proof? If no one had ever told it to me, would I be able to reinvent that on the basis of other beliefs I possess? There was a time when humanity did not have such a concept. Someone must have invented it. What was it that they noticed? Would I notice if I saw something equally novel and equally important? Would I be able to think that far outside the box?

How much of your knowledge could you regenerate? From how deep a deletion? It’s not just a test to cast out insufficiently connected beliefs. It’s a way of absorbing a fountain of knowledge, not just one fact.

A shepherd builds a counting system that works by throwing a pebble into a bucket whenever a sheep leaves the fold, and taking a pebble out whenever a sheep returns. If you, the apprentice, do not understand this system—if it is magic that works for no apparent reason—then you will not know what to do if you accidentally drop an extra pebble into the bucket. That which you cannot make yourself, you cannot remake when the situation calls for it. You cannot go back to the source, tweak one of the parameter settings, and regenerate the output, without the source. If “two plus four equals six” is a brute fact unto you, and then one of the elements changes to “five,” how are you to know that “two plus five equals seven” when you were simply told that “two plus four equals six”?
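The shepherd’s system can be sketched in a few lines (the class and method names here are my own invention). The point is that once you contain the source—pebbles mirror sheep one-to-one—the fix for a dropped extra pebble follows immediately:

```python
class PebbleBucket:
    """Counts sheep outside the fold: one pebble per sheep out."""

    def __init__(self):
        self.pebbles = 0

    def sheep_leaves(self):
        self.pebbles += 1   # throw a pebble into the bucket

    def sheep_returns(self):
        self.pebbles -= 1   # take a pebble out

bucket = PebbleBucket()
for _ in range(3):
    bucket.sheep_leaves()
bucket.sheep_returns()

# The bucket is empty exactly when every sheep is back in the fold.
print(bucket.pebbles)  # → 2 (two sheep still out)

# An apprentice who understands the one-to-one correspondence knows
# that an accidentally dropped pebble must simply be removed again:
bucket.pebbles += 1  # oops, dropped an extra pebble
bucket.pebbles -= 1  # understanding the system tells us the fix
```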

If you see a small plant that drops a seed whenever a bird passes it, it will not occur to you that you can use this plant to partially automate the sheep-counter. Though you learned something that the original maker would use to improve on their invention, you can’t go back to the source and re-create it.

When you contain the source of a thought, that thought can change along with you as you acquire new knowledge and new skills. When you contain the source of a thought, it becomes truly a part of you and grows along with you.

Strive to make yourself the source of every thought worth thinking. If the thought originally came from outside, make sure it comes from inside as well. Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well. And when you find a fountain, see what else it can pour.


1. Not true, by the way.

2. Richard Rorty, “Out of the Matrix: How the Late Philosopher Donald Davidson Showed That Reality Can’t Be an Illusion,” The Boston Globe, October 5, 2003, http://www.boston.com/news/globe/ideas/articles/2003/10/05/out_of_the_matrix/.