We are not living in a simulation

The aim of this post is to challenge Nick Bostrom’s simulation argument by attacking the premise of substrate-independence. Quoting Bostrom in full, this premise is explained as follows:

A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) -- just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).

I contend that this premise, in even its weakest formulation, is utterly, unsalvageably false.

Since Bostrom never precisely defines what a “simulator” is, I will apply the following working definition: a simulator is a physical device which assists a human (or posthuman) observer in deriving information about the states and behavior of a hypothetical physical system. A simulator is “perfect” if it can respond to any query about the state of any point or volume of simulated spacetime with an answer that is correct according to some formal mathematical model of the laws of physics, with both the query and the response encoded in a language that is easily comprehensible to the simulator’s [post]human operator. We can now formulate the substrate-independence hypothesis as follows: any perfect simulator of a conscious being experiences the same qualia as that being.

Let us make a couple of observations about these definitions. First: if the motivation for our hypothetical post-Singularity civilization to simulate our universe is to study it, then any perfect simulator should provide them with everything necessary toward that end. Second: the substrate-independence hypothesis as I have defined it is much weaker than any version Bostrom proposes, for any device which perfectly simulates a human must necessarily be able to answer queries about the state of the human’s brain, such as which synapses are firing at what time, as well as any other structural question right down to the Planck level.

Much of the ground I am about to cover has been trodden in the past by John Searle. I will explain later in this post where I differ with him.

Let’s consider a “hello universe” example of a perfect simulator. Suppose an essentially Newtonian universe in which matter is homogeneous at all sufficiently small scales; i.e., there are either no quanta, or quanta simply behave like billiard balls. Gravity obeys the familiar inverse-square law. The only objects in this universe are two large spheres orbiting each other. Since the two-body problem has an easy closed-form solution, it is straightforward to program a Turing machine to act as a perfect simulator of this universe, and furthermore an ordinary present-day PC can be an adequate stand-in for a Turing machine so long as we don’t ask it to make its answers precise to more decimal places than fit in memory. It would pose no difficulty to actually implement this simulator.
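To make this concrete, here is a minimal sketch of such a simulator, simplified one step further by assuming two equal-mass spheres in a circular orbit about their common barycenter (the general eccentric case only adds a solve of Kepler’s equation). The function name and parameters are my own illustrative choices, not anything from Bostrom:

```python
import math

# "Hello universe": two equal-mass spheres in a circular orbit under
# inverse-square gravity. Because this special case has a closed-form
# solution, any query about the system's state at any time t can be
# answered directly (up to floating-point precision), with no need to
# step the system forward.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_positions(m, r, t):
    """Positions of two spheres of mass m (kg), each at radius r (m)
    from the barycenter, at time t (s). Returns two (x, y) tuples."""
    # Angular velocity from force balance: gravity supplies the
    # centripetal force, G*m*m/(2r)^2 = m * omega^2 * r, so
    # omega = sqrt(G*m / (4*r^3)).
    omega = math.sqrt(G * m / (4 * r**3))
    theta = omega * t
    a = (r * math.cos(theta), r * math.sin(theta))
    b = (-a[0], -a[1])  # the second sphere is diametrically opposite
    return a, b
```

Queried with Jupiter-scale masses, this program answers every positional question about its toy universe correctly, which is all the following argument needs.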

If you ran this simulator with Jupiter-sized spheres, it would reason perfectly about the gravitational effects of those spheres. Yet the computer would not actually produce any more gravity than it would while powered off. You would not be sucked toward your CPU and have your body smeared evenly across its surface. In order for that to happen, the simulator would have to mimic the simulated system in physical form, not merely in computational rules. That is, it would have to actually contain two enormous spheres. Such a machine could still be a “simulator” in the sense that I’ve defined the term — but in colloquial usage, we would stop calling this a simulator and instead call it the real thing.

This observation is an instance of a general principle that ought to be very, very obvious: reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.

Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. If you don’t agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a “nonphysical” “soul” or whatnot (I don’t know how to make sense of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul, and therefore no reason to suppose that it would be conscious. Provided that you agree that qualia are physical phenomena, however, to suppose that they are any kind of exception to the principle I’ve just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Hence, the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator. A machine which walks the way a human walks must have the form of a human leg. A machine which grips the way a human grips must have the form of a human hand. And a machine which experiences the way a human experiences must have the form of a human brain.

For an example of my claim, let us suppose, as Bostrom does, that a simulation which correctly models brain activity down to the level of individual synaptic discharges suffices to model all the essential features of human consciousness. What does that tell us about what would be required in order to build an artificial human? Here is one design that would work: first, write a computer program, running on (sufficiently fast) conventional hardware, which correctly simulates synaptic activity in a human brain. Then, assemble millions of tiny spark plugs, one per dendrite, into the physical configuration of a human brain. Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence in which it predicts synaptic discharges would occur in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed which plugs ought to fire without actually firing them.
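The division of labor in this design can be sketched in a few lines. Both classes here are toy stand-ins I’ve invented for illustration: a real `SynapseSimulator` would model an actual human connectome rather than replay a fixed schedule, and a real `SparkPlugArray` would produce physical discharges rather than log them:

```python
from typing import List

class SynapseSimulator:
    """Toy stand-in for a synapse-level brain simulation: just replays a
    fixed firing schedule, one list of synapse indices per time step."""
    def __init__(self, schedule: List[List[int]]):
        self.schedule = schedule
        self.t = 0

    def step(self) -> List[int]:
        fired = self.schedule[self.t % len(self.schedule)]
        self.t += 1
        return fired

class SparkPlugArray:
    """Toy stand-in for the physical plug array, one plug per dendrite.
    In the thought experiment, fire() produces a real discharge; here it
    only records which plug fired."""
    def __init__(self, n_plugs: int):
        self.n_plugs = n_plugs
        self.fire_log: List[int] = []

    def fire(self, plug_id: int) -> None:
        self.fire_log.append(plug_id)

def run_artificial_brain(sim: SynapseSimulator,
                         plugs: SparkPlugArray,
                         n_steps: int) -> None:
    # The crux of the design: the program only *computes* which synapses
    # discharge; on the view argued above, the qualia depend on the plugs
    # actually firing, not on the computation alone.
    for _ in range(n_steps):
        for plug_id in sim.step():
            plugs.fire(plug_id)
```

The argument of this post is precisely that deleting the `plugs.fire(plug_id)` line, while leaving the computation untouched, would delete the qualia.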

Alternatively, what if granularity right down to the Planck level turned out to be necessary? In that case, the only way to build an artificial brain would be to actually build, particle-for-particle, a brain — since, due to speed-of-light limitations, no other design could possibly model everything it needed to model in real time.

I think the actual requisite granularity is probably somewhere in between. The spark plug design seems too crude to work, while Planck-level correspondence is certainly overkill, because otherwise the tiniest fluctuation in our surrounding environment, such as a .01 degree change in room temperature, would have a profound impact on our mental state.

Now, from here on is where I depart from Searle, if I have not already. Consider the following questions:

  1. If a tree falls in the forest and nobody hears it, does it make an acoustic vibration?

  2. If a tree falls in the forest and nobody hears it, does it make an auditory sensation?

  3. If a tree falls in the forest and nobody hears it, does it make a sound?

  4. Can the Chinese Room pass a Turing test administered in Chinese?

  5. Does the Chinese Room experience the same qualia that a Chinese-speaking human would experience when replying to a letter written in Chinese?

  6. Does the Chinese Room understand Chinese?

  7. Is the Chinese Room intelligent?

  8. Does the Chinese Room think?

Here is the answer key:

  1. Yes.

  2. No.

  3. What do you mean?

  4. Yes.

  5. No.

  6. What do you mean?

  7. What do you mean?

  8. What do you mean?

The problem with Searle is his lack of any clear answer to “What do you mean?”. Most technically-minded people, myself included, think of 6–8 as all meaning something similar to 4. Personally, I think of them as meaning something even weaker than 4, and have no objection to describing, e.g., Google, or even a Bayesian spam filter, as “intelligent”. Searle seems to want them to mean the same as 5, or maybe some conjunction of 4 and 5. But in counterintuitive edge cases like the Chinese Room, they don’t mean anything at all until you assign definitions to them.

I am not certain whether Searle would agree with my belief that it is possible for a Turing machine to correctly answer questions about what qualia a human is experiencing, given a complete physical description of that human. If he takes the negative position on this, then this is a serious disagreement that goes beyond semantics, but I cannot tell that he has ever committed himself to either stance.

Now, there remains a possible argument that might seem to save the simulation hypothesis even in the absence of substrate-independence. “Okay,” you say, “you’ve persuaded me that a human-simulator built of silicon chips would not experience the same qualia as the human it simulates. But you can’t tell me that it doesn’t experience any qualia. For all you or I know, a lump of coal experiences qualia of some sort. So, let’s say you’re in fact living in a simulation implemented in silicon. You’re experiencing qualia, but those qualia are all wrong compared to what you as a carbon-based bag of meat ought to be experiencing. How would you know anything is wrong? How, other than by life experience, do you know what the right qualia for a bag of meat actually are?”

The answer is that I know my qualia are right because they make sense. Qualia are not pure “outputs”: they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don’t have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn’t be able to answer you or in any way connect my qualia to my actions.

So, I think I have now established that, to any extent we can be said to be living in a simulation, the simulator must physically incorporate a human brain. I have not precluded the possibility of a simulation in the vein of “The Matrix”, with a brain-in-a-vat being fed artificial sensory inputs. I think this kind of simulation is indeed possible in principle. However, nothing claimed in Bostrom’s simulation argument would suggest that it is at all likely.

ETA: A question that I’ve put to Sideways can similarly be put to many other commenters on this thread. “Similar in number”, i.e., two apples, two oranges, etc., is, like “embodying the same computation”, an abstract concept which can be realized by a wide variety of physical media. Yet if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that “embodying the same computation” is somehow a privileged concept in this regard—that if I replaced your brain with something else embodying the same computation, you would feel yourself to be unharmed—what is your justification for believing this?