We are not living in a simulation

The aim of this post is to challenge Nick Bostrom’s simulation argument by attacking the premise of substrate-independence. Quoting Bostrom in full, this premise is explained as follows:

A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) -- just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).

I contend that this premise, in even its weakest formulation, is utterly, unsalvageably false.

Since Bostrom never precisely defines what a “simulator” is, I will apply the following working definition: a simulator is a physical device which assists a human (or posthuman) observer with deriving information about the states and behavior of a hypothetical physical system. A simulator is “perfect” if it can respond to any query about the state of any point or volume of simulated spacetime with an answer that is correct according to some formal mathematical model of the laws of physics, with both the query and the response encoded in a language that is easily comprehensible to the simulator’s [post]human operator. We can now formulate the substrate-independence hypothesis as follows: any perfect simulator of a conscious being experiences the same qualia as that being.
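To make this working definition a bit more concrete, here is a minimal sketch of the interface such a device would present to its operator. (The sketch and its names, `Query`, `PerfectSimulator`, and `answer`, are my own illustrative choices; nothing in the argument depends on them.)

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Query:
    """A question about the state of a point of simulated spacetime."""
    position: Tuple[float, float, float]  # spatial coordinates in the model
    time: float                           # simulated time

class PerfectSimulator(ABC):
    """Per the working definition: a device that answers any such query
    correctly according to some formal model of the laws of physics,
    in a language its [post]human operator can easily read."""

    @abstractmethod
    def answer(self, query: Query) -> str:
        """Return a correct, humanly readable description of the state
        at the queried point of simulated spacetime."""
```

Note that the definition is purely behavioral: it constrains what the box must be able to tell you, and says nothing about what goes on inside it.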

Let us make a couple of observations about these definitions. First: if the motivation for our hypothetical post-Singularity civilization to simulate our universe is to study it, then any perfect simulator should provide them with everything necessary toward that end. Second: the substrate-independence hypothesis as I have defined it is much weaker than any version which Bostrom proposes, for any device which perfectly simulates a human must necessarily be able to answer queries about the state of the human’s brain, such as which synapses are firing at what time, as well as any other structural question right down to the Planck level.

Much of the ground I am about to cover has been trodden before by John Searle. I will explain later in this post where I differ with him.

Let’s consider a “hello universe” example of a perfect simulator. Suppose an essentially Newtonian universe in which matter is homogeneous at all sufficiently small scales; i.e., there are either no quanta, or quanta simply behave like billiard balls. Gravity obeys the familiar inverse-square law. The only objects in this universe are two large spheres orbiting each other. Since the two-body problem has an easy closed-form solution, it is hypothetically straightforward to program a Turing machine to act as a perfect simulator of this universe, and furthermore an ordinary present-day PC can be an adequate stand-in for a Turing machine so long as we don’t ask it to make its answers precise to more decimal places than fit in memory. It would pose no difficulty to actually implement this simulator.
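To show how little is involved, here is a sketch of such a simulator in Python, restricted for brevity to the still-easier special case of circular orbits about the barycentre; the class and method names are mine, and the constants are the usual SI values.

```python
import math

G = 6.674e-11  # Newton's gravitational constant, SI units

class TwoBodySimulator:
    """Perfect simulator for the 'hello universe': two homogeneous spheres
    on circular orbits about their common barycentre under inverse-square
    gravity.  Every query has an exact closed-form answer (up to
    floating-point precision)."""

    def __init__(self, m1, m2, separation):
        self.m1, self.m2 = m1, m2
        # Orbital angular velocity, from Kepler's third law.
        self.omega = math.sqrt(G * (m1 + m2) / separation ** 3)
        # Distance of each sphere from the barycentre.
        self.r1 = separation * m2 / (m1 + m2)
        self.r2 = separation * m1 / (m1 + m2)

    def positions(self, t):
        """(x, y) positions of the two spheres at simulated time t (seconds)."""
        a = self.omega * t
        return ((self.r1 * math.cos(a), self.r1 * math.sin(a)),
                (-self.r2 * math.cos(a), -self.r2 * math.sin(a)))

    def field(self, point, t):
        """Newtonian gravitational acceleration (m/s^2) at `point` at time t."""
        gx = gy = 0.0
        for (x, y), m in zip(self.positions(t), (self.m1, self.m2)):
            dx, dy = x - point[0], y - point[1]
            r = math.hypot(dx, dy)
            g = G * m / r ** 2
            gx += g * dx / r
            gy += g * dy / r
        return gx, gy
```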

If you ran this simulator with Jupiter-sized spheres, it would reason perfectly about the gravitational effects of those spheres. Yet the computer would not actually produce any more gravity than it would while powered off. You would not be sucked toward your CPU and have your body smeared evenly across its surface. In order for that to happen, the simulator would have to mimic the simulated system in physical form, not merely in computational rules. That is, it would have to actually contain two enormous spheres. Such a machine could still be a “simulator” in the sense that I’ve defined the term — but in colloquial usage, we would stop calling this a simulator and instead call it the real thing.
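To put rough numbers on it: hand the toy simulator above two Jupiter-mass spheres and it will happily report a crushing gravitational field, while the hardware computing the answer exerts nothing beyond the feeble pull of its own few kilograms. The figures below are purely illustrative.

```python
# Two Jupiter-mass spheres a million kilometres apart.
sim = TwoBodySimulator(m1=1.9e27, m2=1.9e27, separation=1.0e9)

# Simulated field strength 1000 km from the centre of the first sphere:
p1, _ = sim.positions(0.0)
gx, gy = sim.field((p1[0] + 1.0e6, p1[1]), 0.0)
print(f"{(gx**2 + gy**2) ** 0.5:.1e} m/s^2")  # ~1.3e+05 m/s^2, in the simulation

# The actual computer, meanwhile, attracts you at roughly
# G * (a few kg) / (a metre)^2, i.e. on the order of 1e-10 m/s^2 --
# no more than it did while powered off.
```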

This observation is an instance of a general principle that ought to be very, very obvious: reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.

Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don’t agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a “nonphysical” “soul” or whatnot (I don’t know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I’ve just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Hence, the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator. A machine which walks the way a human walks must have the form of a human leg. A machine which grips the way a human grips must have the form of a human hand. And a machine which experiences the way a human experiences must have the form of a human brain.

For an example of my claim, let us suppose, as Bostrom does, that a simulation which correctly models brain activity down to the level of individual synaptic discharges is sufficient to model all the essential features of human consciousness. What does that tell us about what would be required in order to build an artificial human? Here is one design that would work: first, write a computer program, running on (sufficiently fast) conventional hardware, which correctly simulates synaptic activity in a human brain. Then, assemble millions of tiny spark plugs, one per dendrite, into the physical configuration of a human brain. Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence in which it predicts the synaptic discharges would occur in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed which plugs ought to fire without actually firing them.
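A control loop for that design might look something like the following sketch. Both interfaces are hypothetical stand-ins: `brain_model` for the synapse-level simulation, `plug_array` for the physical array of spark plugs.

```python
import time

def run_artificial_brain(brain_model, plug_array, dt=1e-3):
    """Drive the hypothetical spark-plug array from a synapse-level simulation.

    brain_model.step(dt) is assumed to return the indices of the synapses
    that discharge during the next dt seconds of simulated brain activity;
    plug_array.fire(indices) is assumed to physically fire the matching plugs.
    """
    while True:
        t0 = time.monotonic()
        discharges = brain_model.step(dt)  # reasoning about the brain...
        plug_array.fire(discharges)        # ...versus physically reproducing it.
                                           # On the view argued here, only this
                                           # second step generates the qualia.
        # Keep the physical firings in rough real-time lockstep with the
        # simulated synaptic schedule.
        time.sleep(max(0.0, dt - (time.monotonic() - t0)))
```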

Alternatively, what if granularity right down to the Planck level turned out to be necessary? In that case, the only way to build an artificial brain would be to actually build, particle-for-particle, a brain — since, due to speed-of-light limitations, no other design could possibly model everything it needed to model in real time.
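For a sense of the scale involved, here is the back-of-envelope count of Planck-sized cells in a brain-sized volume, taking the Planck length to be about 1.6 × 10^-35 metres and a brain to occupy roughly 1300 cm³:

```python
planck_length = 1.6e-35      # metres (rough value)
brain_volume  = 1.3e-3       # cubic metres, i.e. ~1300 cm^3

cells = brain_volume / planck_length ** 3
print(f"{cells:.1e}")        # ~3e+101 Planck-sized cells to keep track of
```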

I think the actual requisite granularity is probably somewhere in between. The spark plug design seems too crude to work, while Planck-level correspondence is certainly overkill: otherwise the tiniest fluctuation in our surrounding environment, such as a 0.01 degree change in room temperature, would have a profound impact on our mental state, which it plainly does not.

Now, from here on is where I depart from Searle, if I have not already. Consider the following questions:

  1. If a tree falls in the forest and nobody hears it, does it make an acoustic vibration?

  2. If a tree falls in the forest and nobody hears it, does it make an auditory sensation?

  3. If a tree falls in the forest and nobody hears it, does it make a sound?

  4. Can the Chinese Room pass a Turing test administered in Chinese?

  5. Does the Chinese Room experience the same qualia that a Chinese-speaking human would experience when replying to a letter written in Chinese?

  6. Does the Chinese Room understand Chinese?

  7. Is the Chinese Room intelligent?

  8. Does the Chinese Room think?

Here is the answer key:

  1. Yes.

  2. No.

  3. What do you mean?

  4. Yes.

  5. No.

  6. What do you mean?

  7. What do you mean?

  8. What do you mean?

The problem with Searle is his lack of any clear answer to “What do you mean?”. Most technically-minded people, myself included, think of 6–8 as all meaning something similar to 4. Personally, I think of them as meaning something even weaker than 4, and have no objection to describing, e.g., Google, or even a Bayesian spam filter, as “intelligent”. Searle seems to want them to mean the same as 5, or maybe some conjunction of 4 and 5. But in counterintuitive edge cases like the Chinese Room, they don’t mean anything at all until you assign definitions to them.

I am not certain whether Searle would agree with my belief that it is possible for a Turing machine to correctly answer questions about what qualia a human is experiencing, given a complete physical description of that human. If he takes the negative position on this, then this is a serious disagreement that goes beyond semantics, but I cannot find that he has ever committed himself to either stance.

Now, there remains a possible argument that might seem to save the simulation hypothesis even in the absence of substrate-independence. “Okay,” you say, “you’ve persuaded me that a human-simulator built of silicon chips would not experience the same qualia as the human it simulates. But you can’t tell me that it doesn’t experience any qualia. For all you or I know, a lump of coal experiences qualia of some sort. So, let’s say you’re in fact living in a simulation implemented in silicon. You’re experiencing qualia, but those qualia are all wrong compared to what you as a carbon-based bag of meat ought to be experiencing. How would you know anything is wrong? How, other than by life experience, do you know what the right qualia for a bag of meat actually are?”

The answer is that I know my qualia are right because they make sense. Qualia are not pure “outputs”: they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don’t have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn’t be able to answer you or in any way connect my qualia to my actions.

So, I think I have now established that, to whatever extent we can be said to be living in a simulation, the simulator must physically incorporate a human brain. I have not precluded the possibility of a simulation in the vein of “The Matrix”, with a brain-in-a-vat being fed artificial sensory inputs. I think this kind of simulation is indeed possible in principle. However, nothing claimed in Bostrom’s simulation argument would suggest that it is at all likely.

ETA: A question that I’ve put to Sideways can be similarly put to many other commenters on this thread. “Similarity in number”, i.e., two apples, two oranges, etc., is, like “embodying the same computation”, an abstract concept which can be realized by a wide variety of physical media. Yet if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that “embodying the same computation” is somehow a privileged concept in this regard—that if I replaced your brain with something else embodying the same computation, you would feel yourself to be unharmed—what is your justification for believing this?