Consciousness of simulations & uploads: a reductio

Related articles: Nonperson predicates, Zombies! Zombies?, & many more.

ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.

ETA2: I think I may have made a mistake in this post. That mistake was realizing what ontology functionalism would imply, and then deciding that ontology was too weird to be true. An argument from incredulity, essentially. Double oops.

Consciousness belongs to a class of topics I think of as my ‘sore teeth.’ I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.

Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.

Simulating a person

The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer’s bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)

Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it’s probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.

Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.
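To make "behavioural isomorphism" concrete, here is a minimal toy sketch in Python. This is nothing like an actual brain simulation, and every name in it is a placeholder of my own invention; the only point is that a lawful physical system is, in principle, a state plus an update rule, and its outward behaviour is whatever you read off after repeatedly applying that rule.

```python
# Toy sketch only: a "physical system" as a state plus a deterministic
# update rule. The dynamics, stimuli and readout are all stand-ins.

def simulate(initial_state, dynamics, stimuli, readout, dt=1e-3):
    """Apply stimuli, evolve the state lawfully, and record the behaviour."""
    state = initial_state
    behaviour = []
    for stimulus in stimuli:
        state = stimulus(state)            # e.g. simulated auditory input
        state = dynamics(state, dt)        # deterministic physical evolution
        behaviour.append(readout(state))   # e.g. simulated speech output
    return behaviour
```

The claim is only that, fed the same inputs, a loop like this run over a sufficiently faithful model of Simone produces the same outward behaviour as the biological Simone.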

Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions—questions like “Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?”—and get answers.

I’m almost certain she’ll say “Yes.” (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)

The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said “Of course!” because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.

A different kind of simulation

There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I’d say).

(NB: The next few paragraphs are the crucial part of this argument.)

Observe that, ultimately, the computer simulation of Simone above would produce nothing but a huge sequence of zeroes and ones, which would then be processed into visual and audio outputs and spat out of a monitor and speakers (or whatever).

So what’s to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you’re kind to me, a calculator. Atom by tedious atom, I’ll simulate inputs to Simone’s auditory system asking her if she’s conscious, then compute her (physically determined) answer to that question.
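To see why nothing in principle requires the computer, notice that each pass through a loop like the one sketched above bottoms out in ordinary arithmetic. A deliberately silly illustration (a made-up point particle under a made-up force, not anything resembling a brain):

```python
# The same sort of update, unrolled into pencil-and-paper arithmetic.
# A single fictitious particle; every line is one calculation I could
# do by hand.

def hand_step(position, velocity, force, mass, dt):
    acceleration = force / mass                  # a = F / m
    velocity = velocity + acceleration * dt      # v <- v + a*dt
    position = position + velocity * dt          # x <- x + v*dt
    return position, velocity

# Repeat for every simulated particle, every timestep, and you have
# computed Simone's answer without a computer ever being switched on.
print(hand_step(position=0.0, velocity=1.0, force=2.0, mass=1.0, dt=0.001))
```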

Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.

Once again, Simone will claim she’s conscious.

...Yeah, I’m sorry, but I just don’t believe her.

I don’t claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don’t even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.

Oops!

Pigliucci is going to enjoy watching me eat my hat.

What was our mistake?

I’ve thought about this a lot in the last ~10 hours since I came up with the above.

I think when we imagined a simulated human brain, what we were picturing in our imaginations was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...

...only it’s not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical correspondence between those operations and the basic units being represented (atoms, or whatever).

Basically, the simulated consciousness was isomorphic to biological consciousness in much the same way that my shadow is isomorphic to me. If I spoke ASL, I could get my shadow to claim conscious awareness, just as the simulation does, but it wouldn’t mean much.

In retrospect, it should have given us pause that the physical process happening in the computer—zeroes and ones propagating along wires & through transistors—can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn’t exist.
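That last point is easy to demonstrate: the bits themselves don’t privilege one reading over another. A trivial example, just to make the "interpretation lives in the reader" point concrete:

```python
import struct

# One and the same bit pattern, produced however you like:
raw = struct.pack(">f", 3.14)

print(struct.unpack(">f", raw)[0])  # read as a 32-bit float: roughly 3.14
print(struct.unpack(">I", raw)[0])  # read as an unsigned integer: something unrelated
print(list(raw))                    # read as bytes: just four numbers
```

Nothing about `raw` itself decides whether it "is" a number, a character, or one timestep of Simone; that choice is made by whoever reads it.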

Another way of putting it is that, if consciousness is “how the algorithm feels from the inside,” a simulated consciousness is just not following the same algorithm.

But what about the Fading Qualia argument?

The fading qualia argument is another thought experiment, this one by David Chalmers.

Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don’t worry, it still outputs the same electrical signals along the axons; your behaviour won’t be affected.

Then we do this for a second neuron.

Then a third, then a kth… until your brain contains only artificial neurons (N of them, where N ≈ 10^11).
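The premise doing the work here is functional equivalence at the neuron level: each artificial unit is stipulated to compute the same input-output function as the cell it replaces, so nothing downstream can tell the difference. A toy sketch (the threshold-unit "neuron" is a cartoon of my own, not biology):

```python
# Cartoon neurons: same weights, same threshold, same outputs for all
# inputs, different "substrate". Downstream units, and hence behaviour,
# cannot distinguish the two.

def biological_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def silicon_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

assert all(
    biological_neuron(xs, [0.5, -0.2, 0.9], 0.3)
    == silicon_neuron(xs, [0.5, -0.2, 0.9], 0.3)
    for xs in [(1, 0, 1), (0, 1, 0), (1, 1, 1)]
)
```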

Now, what happens to your conscious experience in this process? A few possibilities arise:

  1. Conscious experience is initially the same, then shuts off completely at some discrete number of replaced neurons: maybe 1, maybe N/2. Rejected by virtue of being ridiculously implausible.

  2. Conscious experience fades continuously as k → N. Certainly more plausible than option 1, but still very strange. What does “fading” consciousness mean? Half a visual field? A full visual field with less perceived light intensity? Having been prone to (anemia-induced) loss of consciousness as a child, I can almost convince myself that fading qualia make some sort of sense, but not really...

  3. Conscious experience is unaffected by the transition.

Unlike (apparently) Chalmers, I do think that “fading qualia” might mean something, but I’m far from sure. Option 3 does seem like a better bet. But what’s the difference between a brain full of individual silicon neurons, and a brain simulated on general-purpose silicon chips?

I think the salient difference is that, in a biological brain and an artificial-neuron brain, the patterns of energy and matter flow are similar. Picture an impulse propagating along an axon: that process is physically very similar in the two types of physical brain.

When we simulate a brain on a general-purpose computer, however, there is no physically similar pattern of energy/matter flow. If I had to guess, I suspect this is the rub: you need a certain physical pattern of energy flow to get consciousness.

More thought is needed to clarify the exact difference between saying “consciousness arises from patterns of energy flow in the brain” and “consciousness arises from patterns of graphite on paper.” I think there is definitely a big difference, but it’s not crystal clear to me what exactly it consists in.