GAZP vs. GLUT

In “The Unimagined Preposterousness of Zombies”, Daniel Dennett says:

To date, several philosophers have told me that they plan to accept my challenge to offer a non-question-begging defense of zombies, but the only one I have seen so far involves postulating a “logically possible” but fantastic being — a descendent of Ned Block’s Giant Lookup Table fantasy...

A Giant Lookup Table, in programmer’s parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes the product each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication—times when you’re going to reuse the function a lot and it doesn’t have many possible inputs; or when clock cycles are cheap while you’re initializing, but very expensive while executing.
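
For concreteness, here is a minimal sketch of that multiplication example in Python (the names and the 1 to 100 range are just the illustration from the paragraph above, nothing canonical):

```python
# Precompute every answer once; afterwards, calls are pure table lookups.
TABLE_SIZE = 100

# 10,000 entries, indexed by the two inputs.
GLUT = [[a * b for b in range(1, TABLE_SIZE + 1)]
        for a in range(1, TABLE_SIZE + 1)]

def multiply_glut(a: int, b: int) -> int:
    """Return a * b by lookup; no arithmetic happens at call time."""
    return GLUT[a - 1][b - 1]

def multiply_algorithm(a: int, b: int) -> int:
    """The ordinary alternative: recompute the product on every call."""
    return a * b

assert multiply_glut(37, 58) == multiply_algorithm(37, 58) == 2146
```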

Giant Lookup Tables get very large, very fast. A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 * 10^585 entries.
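
A quick check of the arithmetic behind that figure, assuming each of the ten word slots in each of the twenty remarks can hold any of the 850 Basic English words:

```python
import math

vocabulary = 850        # Basic English word count
words_per_remark = 10
remarks = 20            # a twenty-ply conversation

# Number of distinct conversations = 850^(10 * 20) = 850^200.
log10_entries = math.log10(vocabulary) * words_per_remark * remarks
print(log10_entries)    # ~585.9, i.e. roughly 7.6 * 10^585 entries
```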

Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But “in principle”, as philosophers are fond of saying, it could be done.

The GLUT is not a zombie in the classic sense, because it is microphysically dissimilar to a human. (In fact, a GLUT can’t really run on the same physics as a human; it’s too large to fit in our universe. For philosophical purposes, we shall ignore this and suppose a supply of unlimited memory storage.)

But is the GLUT a zombie at all? That is, does it behave exactly like a human without being conscious?

The GLUT-ed body’s tongue talks about consciousness. Its fingers write philosophy papers. In every way, so long as you don’t peer inside the skull, the GLUT seems just like a human… which certainly seems like a valid example of a zombie: it behaves just like a human, but there’s no one home.

Unless the GLUT is conscious, in which case it wouldn’t be a valid example.

I can’t recall ever seeing anyone claim that a GLUT is conscious. (Admittedly my reading in this area is not up to professional grade; feel free to correct me.) Even people who are accused of being (gasp!) functionalists don’t claim that GLUTs can be conscious.

GLUTs are the reductio ad absurdum to anyone who suggests that consciousness is simply an input-output pattern, thereby disposing of all troublesome worries about what goes on inside.

So what does the Generalized Anti-Zombie Principle (GAZP) say about the Giant Lookup Table (GLUT)?

At first glance, it would seem that a GLUT is the very archetype of a Zombie Master—a distinct, additional, detectable, non-conscious system that animates a zombie and makes it talk about consciousness for different reasons.

In the interior of the GLUT, there’s merely a very simple computer program that looks up inputs and retrieves outputs. Even talking about a “simple computer program” is overshooting the mark, in a case like this. A GLUT is more like ROM than a CPU. We could equally well talk about a series of switched tracks by which some balls roll out of a previously stored stack and into a trough—period; that’s all the GLUT does.
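
As a toy illustration of how little machinery that is (the table contents here are invented; a real GLUT would have one entry for every possible input history), the entire “program” is a single retrieval step:

```python
# Illustrative fragment only: the GLUT is pure storage plus one lookup.
# Keys are entire input histories; values are the prerecorded outputs.
GLUT = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Are you conscious?"):
        "Of course. Let me tell you all about my inner listener...",
}

def glut_step(input_history: tuple) -> str:
    # This lookup is all the "processing" there is: ROM, not CPU.
    return GLUT[input_history]

print(glut_step(("Hello.", "Are you conscious?")))
```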

A spokesperson from People for the Ethical Treatment of Zombies replies: “Oh, that’s what all the anti-mechanists say, isn’t it? That when you look in the brain, you just find a bunch of neurotransmitters opening ion channels? If ion channels can be conscious, why not levers and balls rolling into bins?”

“The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling… Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”

“I don’t know about that,” says the PETZ spokesperson, “all I know is that this so-called zombie writes philosophical papers about consciousness. Where do these philosophy papers come from, if not from consciousness?”

Good question! Let us ponder it deeply.

There’s a game in physics called Follow-The-Energy. Richard Feynman’s father played it with young Richard:

It was the kind of thing my father would have talked about: “What makes it go? Everything goes because the sun is shining.” And then we would have fun discussing it:
“No, the toy goes because the spring is wound up,” I would say. “How did the spring get wound up?” he would ask.
“I wound it up.”
“And how did you get moving?”
“From eating.”
“And food grows only because the sun is shining. So it’s because the sun is shining that all these things are moving.” That would get the concept across that motion is simply the transformation of the sun’s power.

When you get a little older, you learn that energy is conserved, never created or destroyed, so the notion of using up energy doesn’t make much sense. You can never change the total amount of energy, so in what sense are you using it?

So when physicists grow up, they learn to play a new game called Follow-The-Negentropy—which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually.

Rationalists learn a game called Follow-The-Improbability, the grownup version of “How Do You Know?” The rule of the rationalist’s game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it. (This game has amazingly similar rules to Follow-The-Negentropy.)
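
One way to make the rule concrete (my own sketch, measuring everything in bits, which is also what makes the kinship with negentropy visible): a belief’s improbability tells you how many bits of evidence you owe, and each observation pays in bits according to its likelihood ratio.

```python
import math

def bits_owed(prior_probability: float) -> float:
    """Bits needed to justify singling out a hypothesis with this prior."""
    return -math.log2(prior_probability)

def bits_paid(likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Bits one observation contributes, via its likelihood ratio."""
    return math.log2(likelihood_if_true / likelihood_if_false)

# A one-in-a-million belief owes about 20 bits before it stops being
# improbability pulled out of nowhere.
print(bits_owed(1e-6))          # ~19.9 bits owed
print(bits_paid(0.99, 0.01))    # one strong test pays ~6.6 bits
```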

Whenever someone violates the rules of the rationalist’s game, you can find a place in their argument where a quantity of improbability appears from nowhere; and this is as much a sign of a problem as, oh, say, an ingenious design of linked wheels and gears that keeps itself running forever.

The one comes to you and says: “I believe with firm and abiding faith that there’s an object in the asteroid belt, one foot across and composed entirely of chocolate cake; you can’t prove that this is impossible.” But, unless the one had access to some kind of evidence for this belief, it would be highly improbable for a correct belief to form spontaneously. So either the one can point to evidence, or the belief won’t turn out to be true. “But you can’t prove it’s impossible for my mind to spontaneously generate a belief that happens to be correct!” No, but that kind of spontaneous generation is highly improbable, just like, oh, say, an egg unscrambling itself.

In Follow-The-Improbability, it’s highly suspicious to even talk about a specific hypothesis without having had enough evidence to narrow down the space of possible hypotheses. Why aren’t you giving equal air time to a decillion other equally plausible hypotheses? You need sufficient evidence to find the “chocolate cake in the asteroid belt” hypothesis in the hypothesis space—otherwise there’s no reason to give it more air time than a trillion other candidates like “There’s a wooden dresser in the asteroid belt” or “The Flying Spaghetti Monster threw up on my sneakers.”

In Follow-The-Improbability, you are not allowed to pull out big complicated specific hypotheses from thin air without already having a corresponding amount of evidence; because it’s not realistic to suppose that you could spontaneously start discussing the true hypothesis by pure coincidence.

A philosopher says, “This zombie’s skull contains a Giant Lookup Table of all the inputs and outputs for some human’s brain.” This is a very large improbability. So you ask, “How did this improbable event occur? Where did the GLUT come from?”

Now this is not standard philosophical procedure for thought experiments. In standard philosophical procedure, you are allowed to postulate things like “Suppose you were riding a beam of light...” without worrying about physical possibility, let alone mere improbability. But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”

The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (Thereby creating uncounted googols of human beings, some of them in extreme pain, the supermajority gone quite mad in a universe of chaos where inputs bear no relation to outputs. But damn the ethics, this is for philosophy.)

In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

“All right,” says the philosopher, “the GLUT was generated randomly, and just happens to have the same input-output relations as some reference human.”

How, exactly, did you randomly generate the GLUT?

“We used a true randomness source—a quantum device.”

But a quantum device just implements the Branch Both Ways instruction; when you generate a bit from a quantum randomness source, the deterministic result is that one set of universe-branches (locally connected amplitude clouds) see 1, and another set of universes see 0. Do it 4 times, create 16 (sets of) universes.

So, really, this is like saying that you got the GLUT by writing down all possible GLUT-sized sequences of 0s and 1s, in a really damn huge bin of lookup tables; and then reaching into the bin, and somehow pulling out a GLUT that happened to correspond to a human brain-specification. Where did the improbability come from?
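
To put a number on that question (a back-of-the-envelope sketch; the table size and bits-per-entry are invented stand-ins, since the real figure depends on the digitization scheme): a uniformly random draw from the bin matches one particular human brain-specification with probability 2^-N, where N is the GLUT’s length in bits, so the draw itself would have to supply all N bits of improbability.

```python
# Hypothetical sizes, purely for illustration.
log10_entries = 585      # a table on the scale of the Basic English GLUT above
bits_per_entry = 100     # say, 100 bits of digitized output per entry

# Total length N in bits; the lucky draw has probability 2^-N.
n_bits = (10 ** log10_entries) * bits_per_entry
print(f"N has {len(str(n_bits))} digits")   # N = 10^587 bits of improbability, from nowhere
```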

Because if this wasn’t just a coincidence—if you had some reach-into-the-bin function that pulled out a human-corresponding GLUT by design, not just chance—then that reach-into-the-bin function is probably conscious, and so the GLUT is again a cellphone, not a zombie. It’s connected to a human at two removes, instead of one, but it’s still a cellphone! Nice try at concealing the source of the improbability there!

Now behold where Follow-The-Improbability has taken us: where is the source of this body’s tongue talking about an inner listener? The consciousness isn’t in the lookup table. The consciousness isn’t in the factory that manufactures lots of possible lookup tables. The consciousness was in whatever pointed to one particular already-manufactured lookup table, and said, “Use that one!”

You can see why I introduced the game of Follow-The-Improbability. Ordinarily, when we’re talking to a person, we tend to think that whatever is inside the skull must be “where the consciousness is”. It’s only by playing Follow-The-Improbability that we can realize that the real source of the conversation we’re having is that-which-is-responsible-for the improbability of the conversation—however distant in time or space, as the Sun moves a wind-up toy.

“No, no!” says the philosopher. “In the thought experiment, they aren’t randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain’s inputs and outputs! There! I’ve got you cornered now! You can’t play Follow-The-Improbability any further!”

Oh. So your specification is the source of the improbability here.

When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.

That which points to the one GLUT that talks about consciousness, out of all the vast space of possibilities, is now… the conscious person asking us to imagine this whole scenario. And our own brains, which will fill in the blank when we imagine, “What will this GLUT say in response to ‘Talk about your inner listener’?”

The moral of this story is that when you follow back discourse about “consciousness”, you generally find consciousness. It’s not always right in front of you. Sometimes it’s very cleverly hidden. But it’s there. Hence the Generalized Anti-Zombie Principle.

If there is a Zombie Master in the form of a chatbot that processes and remixes amateur human discourse about “consciousness”, the humans who generated the original text corpus are conscious.

If someday you come to understand consciousness, and look back, and see that there’s a program you can write which will output confused philosophical discourse that sounds an awful lot like humans without itself being conscious—then when I ask “How did this program come to sound similar to humans?” the answer is that you wrote it to sound similar to conscious humans, rather than choosing on the criterion of similarity to something else. This doesn’t mean your little Zombie Master is conscious—but it does mean I can find consciousness somewhere in the universe by tracing back the chain of causality, which means we’re not entirely in the Zombie World.

But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?

Well, then it wouldn’t be conscious. IMHO.

I mean, there’s got to be more to it than inputs and outputs.

Otherwise even a GLUT would be conscious, right?


Oh, and for those of you wondering how this sort of thing relates to my day job...

In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be “moral”. They can’t agree among themselves on why, or what they mean by the word “moral”; but they all agree that doing Friendly AI theory is unnecessary. And when you ask them how an arbitrarily generated AI ends up with moral outputs, they proffer elaborate rationalizations, aimed at AIs, of that which they deem “moral”; and there are all sorts of problems with this, but the number one problem is, “Are you sure the AI would follow the same line of thought you invented to argue human morals, when, unlike you, the AI doesn’t start out knowing what you want it to rationalize?” You could call the counter-principle Follow-The-Decision-Information, or something along those lines. You can account for an AI that does improbably nice things by telling me how you chose the AI’s design from a huge space of possibilities, but otherwise the improbability is being pulled out of nowhere—though more and more heavily disguised, as rationalized premises are rationalized in turn.

So I’ve already done a whole series of posts which I myself generated using Follow-The-Improbability. But I didn’t spell out the rules explicitly at that time, because I hadn’t done the thermodynamic posts yet...

Just thought I’d mention that. It’s amazing how many of my Overcoming Bias posts would coincidentally turn out to include ideas surprisingly relevant to discussion of Friendly AI theory… if you believe in coincidence.