Zombies! Zombies?

Your “zombie”, in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.

It is furthermore claimed that if zombies are “possible” (a term over which battles are still being fought), then, purely from our knowledge of this “possibility”, we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is “epiphenomenalism”.

(For those unfamiliar with zombies, I emphasize that this is not a strawman. See, for example, the SEP entry on Zombies. The “possibility” of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)

I once read somewhere, “You are not the one who speaks your thoughts—you are the one who hears your thoughts”. In Hebrew, the word for the highest soul, that which God breathed into Adam, is N’Shama—“the hearer”.

If you conceive of “consciousness” as a purely passive listening, then the notion of a zombie initially seems easy to imagine. It’s someone who lacks the N’Shama, the hearer.

(Warning: Long post ahead. Very long 6,600-word post involving David Chalmers ahead. This may be taken as my demonstrative counterexample to Richard Chappell’s Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers. Edit December 2019: There now exists a shorter edited version of this post here.)

When you open a refrigerator and find that the orange juice is gone, you think “Darn, I’m out of orange juice.” The sound of these words is probably represented in your auditory cortex, as though you’d heard someone else say it. (Why do I think this? Because native Chinese speakers can remember longer digit sequences than English speakers. Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous “seven plus or minus two” for English speakers. There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)

Let’s suppose the above is correct; as a postulate, it should certainly present no problem for advocates of zombies. Even if humans are not like this, it seems easy enough to imagine an AI constructed this way (and imaginability is what the zombie argument is all about). It’s not only conceivable in principle, but quite possible in the next couple of decades, that surgeons will lay a network of neural taps over someone’s auditory cortex and read out their internal narrative. (Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.)

So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes “Darn, I’m out of orange juice”. On this point, epiphenomenalists would willingly agree.

But, says the epiphenomenalist, in the zombie there is no one inside to hear; the inner listener is missing. The internal narrative is spoken, but unheard. You are not the one who speaks your thoughts, you are the one who hears them.

It seems a lot more straightforward (they would say) to make an AI that prints out some kind of internal narrative, than to show that an inner listener hears it.

The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just “possible in theory”, or “imaginable”, or something along those lines—then consciousness must be extra-physical, something over and above mere atoms. Why? Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.

Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.

And though the Hebrew word for the innermost soul is N’Shama, that-which-hears, I can’t recall hearing a rabbi arguing for the possibility of zombies. Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn’t actually do anything.

The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.

Though there are other elements to the zombie argument (I’ll deal with them below), I think that the intuition of the passive listener is what first seduces people to zombie-ism. In particular, it’s what seduces a lay audience to zombie-ism. The core notion is simple and easy to access: The lights are on but no one’s home.

Philosophers are appealing to the intuition of the passive listener when they say “Of course the zombie world is imaginable; you know exactly what it would be like.”

One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are “possible”. Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were “possible”, and didn’t bother to define what sort of possibility was meant.

Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility. If you have a collection of statements like (A->B), (B->C), (C->~A), then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B), (B->C), and (C->~A) true. In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.
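This toy case is small enough to settle by exhaustive search. The sketch below (Python, used purely for illustration; none of this code is from the original post) enumerates all eight truth assignments and confirms that exactly the three assignments named above are models:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def consistent(a, b, c):
    """All three statements from the text: (A->B), (B->C), (C->~A)."""
    return implies(a, b) and implies(b, c) and implies(c, not a)

# Brute-force every assignment of 0/1 to A, B, C and keep the models.
models = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
          if consistent(a, b, c)]
print(models)  # -> [(0, 0, 0), (0, 0, 1), (0, 1, 1)]
```

Those three tuples are exactly A=B=C=0, A=B=0, C=1, and A=0, B=C=1: the compound belief is logically possible because at least one model exists.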

Something will seem possible—will seem “conceptually possible” or “imaginable”—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which was one of the first problems ever proven NP-complete.
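For concreteness, here is a minimal brute-force satisfiability check over the example clauses above, with D as a fourth variable (Python, for illustration only; the clause encoding is my own). The point is the cost: the search visits all 2^n assignments, and for general 3-SAT no known algorithm avoids that exponential blowup in the worst case:

```python
from itertools import product

# Each clause is a disjunction of three literals; a literal is
# (variable index, polarity), where polarity False means negation.
# Variables: 0=A, 1=B, 2=C, 3=D.
clauses = [
    [(0, True), (1, True), (2, True)],    # (A or B or C)
    [(1, True), (2, False), (3, True)],   # (B or ~C or D)
    [(3, True), (0, False), (2, False)],  # (D or ~A or ~C)
]

def satisfies(assignment, clauses):
    """True iff every clause contains at least one true literal."""
    return all(any(assignment[var] == pol for var, pol in clause)
               for clause in clauses)

# Exhaustive search over all 2^4 = 16 assignments.
models = [bits for bits in product([False, True], repeat=4)
          if satisfies(bits, clauses)]
print(len(models))  # -> 11 of the 16 assignments are models
```

This particular instance is trivially satisfiable; the NP-completeness claim is about how the search space grows as the number of variables does.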

So just because you don’t see a contradiction in the Zombie World at first glance, it doesn’t mean that no contradiction is there. It’s like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility (“I don’t see a problem”) to logical possibility in the full technical sense is a very great leap. It’s easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it’s logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.

Just because you don’t see a contradiction yet is no guarantee that you won’t see a contradiction in another 30 seconds. “All odd numbers are prime. Proof: 3 is prime, 5 is prime, 7 is prime...”
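The odd-numbers-are-prime “proof” survives exactly three test cases before a mechanical search falsifies it (a few lines of Python, for illustration):

```python
def is_prime(n):
    """Trial division; adequate for small n."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Scan the odd numbers for the first counterexample to
# "all odd numbers are prime".
counterexample = next(n for n in range(3, 100, 2) if not is_prime(n))
print(counterexample)  # -> 9 (= 3 * 3), the very next case after 3, 5, 7
```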

So let us ponder the Zombie Argument a little longer: Can we think of a counterexample to the assertion “Consciousness has no third-party-detectable causal impact on the world”?

If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of “I am aware” and “My awareness is separate from my thoughts” and “I am not the one who speaks my thoughts, but the one who hears them” and “My stream of consciousness is not my consciousness” and “It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior.”

You can even say these sentences out loud, as you meditate. In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.

This certainly seems like the inner listener being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.

Imagine that a mysterious race of aliens visits you, and leaves you a mysterious black box as a gift. You try poking and prodding the black box, but (as far as you can tell) you never succeed in eliciting a reaction. You can’t make the black box produce gold coins or answer questions. So you conclude that the black box is causally inactive: “For all X, the black box doesn’t do X.” The black box is an effect, but not a cause; epiphenomenal; without causal potency. In your mind, you test this general hypothesis to see if it is true in some trial cases, and it seems to be true—“Does the black box turn lead to gold? No. Does the black box boil water? No.”

But you can see the black box; it absorbs light, and weighs heavy in your hand. This, too, is part of the dance of causality. If the black box were wholly outside the causal universe, you couldn’t see it; you would have no way to know it existed; you could not say, “Thanks for the black box.” You didn’t think of this counterexample when you formulated the general rule “All X: Black box doesn’t do X”. But it was there all along.

(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven’t the slightest clue that it’s there in your living room. That was their joke.)

If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think “I am aware that I am aware”—and say out loud, “I am aware that I am aware”—then your consciousness is not without effect on your internal narrative, or your moving lips. You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud.

I have not seen the above argument written out in that particular way—“the listener caught in the act of listening”—though it may well have been said before.

But it is a standard point—which zombie-ist philosophers accept!—that the Zombie World’s philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.

At this point, the Zombie World stops being an intuitive consequence of the idea of a passive listener.

Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world. You can argue clever reasons why this is not so, but you have to be clever.

You would intuitively suppose that if your inward awareness went away, this would change the world, in that your internal narrative would no longer say things like “There is a mysterious listener within me,” because the mysterious listener would be gone. It is usually right after you focus your awareness on your awareness that your internal narrative says “I am aware of my awareness”, which suggests that if the first event never happened again, neither would the second. You can argue clever reasons why this is not so, but you have to be clever.

You can form a propositional belief that “Consciousness is without effect”, and not see any contradiction at first, if you don’t realize that talking about consciousness is an effect of being conscious. But once you see the connection from the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how philosophers write papers about consciousness, zombie-ism stops being intuitive and starts requiring you to postulate strange things.

One strange thing you might postulate is that there’s a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.

A Zombie Master doesn’t seem impossible. Human beings often don’t sound all that coherent when talking about consciousness. It might not be that hard to fake their discourse, to the standards of, say, a human amateur talking in a bar. Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a non-conscious but sophisticated AI, better than today’s models but not self-modifying; and get back discourse about “consciousness” that sounded as sensible as most humans, which is to say, not very.

But this speech about “consciousness” would not be spontaneous. It would not be produced within the AI. It would be a recorded imitation of someone else talking. That is just a holodeck, with a central AI writing the speech of the non-player characters. This is not what the Zombie World is about.

By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are “bridging laws” that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.

The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie’s lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)

When a philosopher in our world types, “I think the Zombie World is possible”, his fingers strike keys in sequence: Z-O-M-B-I-E. There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher’s internal narrative first began talking about “consciousness”.

And the philosopher’s zombie twin strikes the same keys, for the same reason, causally speaking. There is no cause within the chain of explanation for why the philosopher writes the way he does which is not also present in the zombie twin. The zombie twin also has an internal narrative about “consciousness”, which a super-fMRI could read out of the auditory cortex. And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.

So you can’t say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot. When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.

As the most formidable advocate of zombie-ism, David Chalmers, writes:

Think of my zombie twin in the universe next door. He talks about conscious experience all the time—in fact, he seems obsessed by it. He spends ridiculous amounts of time hunched over a computer, writing chapter after chapter on the mysteries of consciousness. He often comments on the pleasure he gets from certain sensory qualia, professing a particular love for deep greens and purples. He frequently gets into arguments with zombie materialists, arguing that their position cannot do justice to the realities of conscious experience.

And yet he has no conscious experience at all! In his universe, the materialists are right and he is wrong. Most of his claims about conscious experience are utterly false. But there is certainly a physical or functional explanation of why he makes the claims he makes. After all, his universe is fully law-governed, and no events therein are miraculous, so there must be some explanation of his claims.

...Any explanation of my twin’s behavior will equally count as an explanation of my behavior, as the processes inside his body are precisely mirrored by those inside mine. The explanation of his claims obviously does not depend on the existence of consciousness, as there is no consciousness in his world. It follows that the explanation of my claims is also independent of the existence of consciousness.

Chalmers is not arguing against zombies; those are his actual beliefs!

This paradoxical situation is at once delightful and disturbing. It is not obviously fatal to the nonreductive position, but it is at least something that we need to come to grips with...

I would seriously nominate this as the largest bullet ever bitten in the history of time. And that is a backhanded compliment to David Chalmers: A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn’t so.

Why would anyone bite a bullet that large? Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?

Not because of the first intuition I wrote about, the intuition of the passive listener. That intuition may say that zombies can drive cars or do math or even fall in love, but it doesn’t say that zombies write philosophy papers about their passive listeners.

The zombie argument does not rest solely on the intuition of the passive listener. If this was all there was to the zombie argument, it would be dead by now, I think. The intuition that the “listener” can be eliminated without effect would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.

No, the drive to bite this bullet comes from an entirely different intuition—the intuition that no matter how many atoms you add up, no matter how many masses and electrical charges interact with each other, they will never necessarily produce a subjective sensation of the mysterious redness of red. It may be a fact about our physical universe (Chalmers says) that putting such-and-such atoms into such-and-such a position evokes a sensation of redness; but if so, it is not a necessary fact, it is something to be explained above and beyond the motion of the atoms.

But if you consider the second intuition on its own, without the intuition of the passive listener, it is hard to see why it implies zombie-ism. Maybe there’s just a different kind of stuff, apart from and additional to atoms, that is not causally passive—a soul that actually does stuff, a soul that plays a real causal role in why we write about “the mysterious redness of red”. Take out the soul, and... well, assuming you just don’t fall over in a coma, you certainly won’t write any more papers about consciousness!

This is the position taken by Descartes and most other ancient thinkers: The soul is of a different kind, but it interacts with the body. Descartes’s position is technically known as substance dualism—there is a thought-stuff, a mind-stuff, and it is not like atoms; but it is causally potent, interactive, and leaves a visible mark on our universe.

Zombie-ists are property dualists—they don’t believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.

“Beyond the physical”? What does that mean? It means the extra properties are there, but, unlike electrical charge or mass, they don’t influence the motion of the atoms. The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.

So the additional properties are there, but not causally active. The extra properties do not move atoms around, which is why they can’t be detected by third parties.

And that’s why we can (allegedly) imagine a universe just like this one, with all the atoms in the same places, but the extra properties missing, so that everything goes on the same as before, but no one is conscious.

The Zombie World may not be physically possible, say the zombie-ists—because it is a fact that all the matter in our universe has the extra properties, or obeys the bridging laws that evoke consciousness—but the Zombie World is logically possible: the bridging laws could have been different.

But, once you realize that conceivability is not the same as logical possibility, and that the Zombie World isn’t even all that intuitive, why say that the Zombie World is logically possible?

Why, oh why, say that the extra properties are epiphenomenal and undetectable?

We can put this dilemma very sharply: Chalmers believes that there is something called consciousness, and this consciousness embodies the true and indescribable substance of the mysterious redness of red. It may be a property beyond mass and charge, but it’s there, and it is consciousness. Now, having said the above, Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things? If that’s true, we need some separate physical explanation for why Chalmers talks about “the mysterious redness of red”. That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the “mysterious redness of red”.

Chalmers does confess that these two things seem like they ought to be related, but really, why do you need both? Why not just pick one or the other?

Once you’ve postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the “mysterious redness of red”?

Isn’t Descartes taking the simpler approach, here? The strictly simpler approach?

Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?

Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?

I am not endorsing Descartes’s view. But at least I can understand where Descartes is coming from. Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness. Fine.

But now the zombie-ists postulate that this mysterious stuff doesn’t do anything, so you need a whole new explanation for why you say you’re conscious.

That isn’t vitalism. That’s something so bizarre that vitalists would spit out their coffee. “When fires burn, they release phlogiston. But phlogiston doesn’t have any experimentally detectable impact on our universe, so you’ll have to go looking for a separate explanation of why a fire can melt snow.” What?

Are property dualists under the impression that if they postulate a new active force, something that has a causal impact on observables, they will be sticking their necks out too far?

Me, I’d say that if you postulate a mysterious, separate, additional, inherently mental property of consciousness, above and beyond positions and velocities, then, at that point, you have already stuck your neck out as far as it can go. To postulate this stuff of consciousness, and then further postulate that it doesn’t do anything—for the love of cute kittens, why?

There isn’t even an obvious career motive. “Hi, I’m a philosopher of consciousness. My subject matter is the most important thing in the universe and I should get lots of funding? Well, it’s nice of you to say so, but actually the phenomenon I study doesn’t do anything whatsoever.” (Argument from career impact is not valid, but I say it to leave a line of retreat.)

Chalmers critiques substance dualism on the grounds that it’s hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?

When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?

If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan’s parable of the dragon in the garage. “I have a dragon in my garage.” Great! I want to see it, let’s go! “You can’t see it—it’s an invisible dragon.” Oh, I’d like to hear it then. “Sorry, it’s an inaudible dragon.” I’d like to measure its carbon dioxide output. “It doesn’t breathe.” I’ll toss a bag of flour into the air, to outline its form. “The dragon is permeable to flour.”

One motive for trying to make your theory unfalsifiable is that deep down you fear to put it to the test. Sir Roger Penrose (physicist) and Stuart Hameroff (neurologist) are substance dualists; they think that there is something mysterious going on in quantum mechanics, that Everett is wrong and that the “collapse of the wave-function” is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud “I think therefore I am.” Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.

This is in the process of being tested, and so far, prospects are not looking good for Penrose—

—but Penrose’s basic conduct is scientifically respectable. Not Bayesian, maybe, but still fundamentally healthy. He came up with a wacky hypothesis. He said how to test it. He went out and tried to actually test it.

As I once said to Stuart Hameroff, “I think the hypothesis you’re testing is completely hopeless, and your experiments should definitely be funded. Even if you don’t find exactly what you’re looking for, you’re looking in a place where no one else is looking, and you might find something interesting.”

So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.

I don’t think this is actually true of Chalmers, though. If Chalmers lacked self-honesty, he could make things a lot easier on himself.

(But just in case Chalmers is reading this and does have falsification-fear, I’ll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers’s theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it’s true.)

Chalmers is one of the most frustrating philosophers I know. Sometimes I wonder if he’s pulling an “Atheism Conquered”. Chalmers does this really sharp analysis... and then turns left at the last minute. He lays out everything that’s wrong with the Zombie World scenario, and then, having reduced the whole argument to smithereens, calmly accepts it.

Chalmers does the same thing when he lays out, in calm detail, the problem with saying that our own beliefs in consciousness are justified, when our zombie twins say exactly the same thing for exactly the same reasons and are wrong.

On Chalmers’s theory, Chalmers saying that he believes in consciousness cannot be causally justified; the belief is not caused by the fact itself. In the absence of consciousness, Chalmers would write the same papers for the same reasons.

On epiphenomenalism, Chalmers saying that he believes in consciousness cannot be justified as the product of a process that systematically outputs true beliefs, because the zombie twin writes the same papers using the same systematic process and is wrong.

Chalmers admits this. Chalmers, in fact, explains the argument in great detail in his book. Okay, so Chalmers has solidly proven that he is not justified in believing in epiphenomenal consciousness, right? No. Chalmers writes:

Conscious experience lies at the center of our epistemic universe; we have access to it directly. This raises the question: what is it that justifies our beliefs about our experiences, if it is not a causal link to those experiences, and if it is not the mechanisms by which the beliefs are formed? I think the answer to this is clear: it is having the experiences that justifies the beliefs. For example, the very fact that I have a red experience now provides justification for my belief that I am having a red experience...

Because my zombie twin lacks experiences, he is in a very different epistemic situation from me, and his judgments lack the corresponding justification. It may be tempting to object that if my belief lies in the physical realm, its justification must lie in the physical realm; but this is a non sequitur. From the fact that there is no justification in the physical realm, one might conclude that the physical portion of me (my brain, say) is not justified in its belief. But the question is whether I am justified in the belief, not whether my brain is justified in the belief, and if property dualism is correct then there is more to me than my brain.

So—if I’ve got this thesis right—there’s a core you, above and beyond your brain, that believes it is not a zombie, and directly experiences not being a zombie; and so its beliefs are justified.

But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.

The zombie Chalmers can’t have written the book because of the zombie’s core self above the brain; there must be some entirely different reason, within the laws of physics.

It follows that even if there is a part of Chalmers hidden away that is conscious and believes in consciousness, directly and without mediation, there is also a separable subspace of Chalmers—a causally closed cognitive subsystem that acts entirely within physics—and this “outer self” is what speaks Chalmers’s internal narrative, and writes papers on consciousness.

I do not see any way to evade the charge that, on Chalmers’s own theory, this separable outer Chalmers is deranged. This is the part of Chalmers that is the same in this world, or the Zombie World; and in either world it writes philosophy papers on consciousness for no valid reason. Chalmers’s philosophy papers are not output by that inner core of awareness and belief-in-awareness; they are output by the mere physics of the internal narrative that makes Chalmers’s fingers strike the keys of his computer.

And yet this deranged outer Chalmers is writing philosophy papers that just happen to be perfectly right, by a separate and additional miracle. Not a logically necessary miracle (then the Zombie World would not be logically possible). A physically contingent miracle, that happens to be true in what we think is our universe, even though science can never distinguish our universe from the Zombie World.

Or at least, that would seem to be the implication of what the self-confessedly deranged outer Chalmers is telling us.

I think I speak for all reductionists when I say Huh?

That’s not epicycles. That’s, “Planetary motions follow these epicycles—but epicycles don’t actually do anything—there’s something else that makes the planets move the same way the epicycles say they should, which I haven’t been able to explain—and by the way, I would say this even if there weren’t any epicycles.”

I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.

When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence. Any self-modifying AI that starts out in a reflectively inconsistent state won’t stay that way for long.

If a self-modifying AI looks at a part of itself that concludes “B” on condition A—a part of itself that writes “B” to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write “B” to the belief pool under condition A.
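As a minimal toy sketch of that self-audit step (purely illustrative; the rule, the simulated ground truth, and the reliability threshold are all my own inventions, not any actual AI design):

```python
# Toy sketch of a self-inspecting agent auditing one of its own
# belief-writing parts. Everything here is hypothetical illustration.

def rule_b(condition_a):
    """A belief-writing part: writes "B" to memory whenever condition A holds."""
    return "B" if condition_a else None

def audit(rule, observations):
    """Fraction of the rule's written outputs that matched what was actually true."""
    written = [(rule(cond), truth)
               for cond, truth in observations
               if rule(cond) is not None]
    if not written:
        return 1.0  # a rule that never writes anything has written nothing false
    return sum(out == truth for out, truth in written) / len(written)

# Simulated world: whenever condition A is true, the correct belief is
# actually "C", so rule_b systematically writes false data to memory.
observations = [(True, "C"), (True, "C"), (False, None), (True, "C")]

belief_writers = [rule_b]
# Self-modification step: discard any part that systematically writes falsehoods.
belief_writers = [r for r in belief_writers if audit(r, observations) > 0.5]

print(len(belief_writers))  # the buggy part has been removed
```

The point of the sketch is only the shape of the operation: the agent traces how the part causally behaves in its environment, measures its reliability, and edits itself accordingly.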

Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI. This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for. So I have to invent a reflectively coherent theory anyway. And when I do, by golly, reflective coherence turns out to make intuitive sense.

So that’s the unusual way in which I tend to think about these things. And now I look back at Chalmers:

The causally closed “outer Chalmers” (that is not influenced in any way by the “inner Chalmers” that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an “inner Chalmers” that are correct for no logical reason in what happens to be our universe.

But there’s no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.

So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense. Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.

This is not a necessary problem for Friendly AI theorists. It is only a problem if you happen to be an epiphenomenalist. If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), then people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say “consciousness”.

According to Chalmers, the causally closed cognitive system of Chalmers’s internal narrative is (mysteriously) malfunctioning in a way that, not by necessity, but just in our universe, miraculously happens to be correct. Furthermore, the internal narrative asserts “the internal narrative is mysteriously malfunctioning, but miraculously happens to be correctly echoing the justified thoughts of the epiphenomenal inner core”, and again, in our universe, miraculously happens to be correct.

Oh, come on!

Shouldn’t there come a point where you just give up on an idea? Where, on some raw intuitive level, you just go: What on Earth was I thinking?

Humanity has accumulated some broad experience with what correct theories of the world look like. This is not what a correct theory looks like.

“Argument from incredulity,” you say. Fine, you want it spelled out? The said Chalmersian theory postulates multiple unexplained complex miracles. This drives down its prior probability, by the conjunction rule of probability and Occam’s Razor. It is therefore dominated by at least two theories which postulate fewer miracles, namely:

  • Substance dualism:

    • There is a stuff of consciousness which is not yet understood, an extraordinary super-physical stuff that visibly affects our world; and this stuff is what makes us talk about consciousness.

  • Not-quite-faith-based reductionism:

    • That-which-we-name “consciousness” happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.

    • Your intuition that no material substance can possibly add up to consciousness is incorrect. If you actually knew exactly why you talk about consciousness, this would give you new insights, of a form you can’t now anticipate; and afterward you would realize that your arguments about normal physics having no room for consciousness were flawed.

Compare to:

  • Epiphenomenal property dualism:

    • Matter has additional consciousness-properties which are not yet understood. These properties are epiphenomenal with respect to ordinarily observable physics—they make no difference to the motion of particles.

    • Separately, there exists a not-yet-understood reason within normal physics why philosophers talk about consciousness and invent theories of dual properties.

    • Miraculously, when philosophers talk about consciousness, the bridging laws of our world are exactly right to make this talk about consciousness correct, even though it arises from a malfunction (drawing of logically unwarranted conclusions) in the causally closed cognitive system that types philosophy papers.
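The conjunction-rule point can be made with toy numbers (the probabilities below are made up purely for illustration, not an estimate of any theory’s actual prior):

```python
# Made-up illustrative numbers: suppose each independent unexplained
# "miracle" gets a prior of 0.01. By the conjunction rule,
# P(A and B) = P(A) * P(B | A) <= P(A), so a theory that postulates
# two independent miracles starts out strictly less probable than a
# theory that postulates only one.

p_miracle = 0.01  # hypothetical prior for a single unexplained miracle

p_one_miracle_theory = p_miracle              # one postulated miracle
p_two_miracle_theory = p_miracle * p_miracle  # two independent miracles

assert p_two_miracle_theory < p_one_miracle_theory
print(p_one_miracle_theory, p_two_miracle_theory)
```

Whatever the actual numbers, each additional independent miracle can only multiply the prior down, never up; that is the sense in which the two-miracle theory is dominated.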

I know I’m speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.

There are times when, as a rationalist, you have to believe things that seem weird to you. Relativity seems weird, quantum mechanics seems weird, natural selection seems weird.

But these weirdnesses are pinned down by massive evidence. There’s a difference between believing something weird because science has confirmed it overwhelmingly—

—versus believing a proposition that seems downright deranged, because of a great big complicated philosophical argument centered around unspecified miracles and giant blank spots not even claimed to be understood—

—in a case where even if you accept everything that has been told to you so far, afterward the phenomenon will still seem like a mystery and still have the same quality of wondrous impenetrability that it had at the start.

The correct thing for a rationalist to say at this point, if all of David Chalmers’s arguments seem individually plausible—which they don’t, to me—is:

“Okay… I don’t know how consciousness works… I admit that… and maybe I’m approaching the whole problem wrong, or asking the wrong questions… but this zombie business can’t possibly be right. The arguments aren’t nailed down enough to make me believe this—especially when accepting it won’t make me feel any less confused. On a core gut level, this just doesn’t look like the way reality could really really work.”

Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers’s thesis. System 1 is not a substitute for System 2, though it can help point the way. You still have to track down where the problems are specifically.

Chalmers wrote a big book, not all of which is available through free Google preview. I haven’t duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail. I’ve just tried to tack on a final refutation of Chalmers’s last presented defense, which Chalmers has not yet countered to my knowledge. Hit the ball back into his court, as it were.

But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say “That can’t possibly be right” and start looking for a flaw.