Consciousness of simulations & uploads: a reductio

Related articles: Nonperson predicates, Zombies! Zombies?, & many more.

ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.

ETA2: I think I may have made a mistake in this post. That mistake was working out what ontology functionalism would imply, and then deciding that ontology was too weird to be true. An argument from incredulity, essentially. Double oops.

Consciousness belongs to a class of topics I think of as my ‘sore teeth.’ I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.

Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.

Simulating a person

The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer’s bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)

Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it’s probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.

Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.

Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions—questions like “Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?”—and get answers.

I’m almost certain she’ll say “Yes.” (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)

The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said “Of course!” because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.

A different kind of simulation

There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I’d say).

(NB: The next few paragraphs are the crucial part of this argument.)

Observe that ultimately, the computer simulation of Simone above would output nothing but a huge sequence of zeroes and ones, process them into visual and audio outputs, and spit them out of a monitor and speakers (or whatever).

So what’s to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you’re kind to me, a calculator. Atom by tedious atom, I’ll simulate inputs to Simone’s auditory system asking her if she’s conscious, then compute her (physically determined) answer to that question.
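To make the "crunching the numbers myself" point concrete, here is a toy sketch of my own (not from the post, and vastly simpler than an atom-by-atom simulation): one update step of a standard leaky integrate-and-fire neuron model. The parameter values are illustrative stand-ins. The point is that every operation is plain arithmetic that could, in principle, be worked out with pencil and paper.

```python
# Toy model (my construction): a leaky integrate-and-fire neuron.
# Each step is ordinary arithmetic — nothing a pencil couldn't do, given time.

def lif_step(v, input_current, dt=0.001, tau=0.02, v_rest=-0.065,
             v_threshold=-0.050, v_reset=-0.065, resistance=1e7):
    """Advance membrane potential v (volts) by one time step dt (seconds).

    Returns (new_v, spiked), using dv/dt = (v_rest - v + R*I) / tau.
    """
    dv = (v_rest - v + resistance * input_current) * (dt / tau)
    v = v + dv
    if v >= v_threshold:
        return v_reset, True   # neuron fires and its potential resets
    return v, False

# Drive the neuron with a constant input current and count its spikes.
v, spikes = -0.065, 0
for _ in range(1000):          # simulate one second in 1 ms steps
    v, spiked = lif_step(v, input_current=2e-9)
    spikes += spiked

print(spikes)
```

A real pencil-and-paper run would of course track every atom rather than a cartoon neuron, but the character of the computation is the same: a long sequence of determinate arithmetic updates.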

Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of “in principle.”

Once again, Simone will claim she’s conscious.

...Yeah, I’m sorry, but I just don’t believe her.

I don’t claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don’t even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.


Pigliucci is going to enjoy watching me eat my hat.

What was our mistake?

I’ve thought about this a lot in the last ~10 hours since I came up with the above.

I think when we imagined a simulated human brain, what we were actually picturing was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...

...only it’s not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical relation between them and the represented basic units of the simulation (atoms, or whatever).

Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn’t mean much.

In retrospect, it should have given us pause that the physical process happening in the computer—zeroes and ones propagating along wires & through transistors—can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn’t exist.
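The interpretation-dependence of bit patterns is easy to demonstrate directly. Here is a small example of my own (not from the post): the very same four bytes mean entirely different things depending on which decoding an outside observer chooses.

```python
# One physical bit pattern, several incompatible interpretations — the
# "meaning" lives in the observer's choice of decoding, not in the bits.
import struct

bits = struct.pack('>f', 85.125)        # four bytes encoding the float 85.125

as_float = struct.unpack('>f', bits)[0]  # read as a temperature forecast?
as_int = struct.unpack('>I', bits)[0]    # read as one big unsigned integer
as_bytes = list(bits)                    # read as four raw byte values

print(as_float, as_int, as_bytes)
```

Nothing about the bytes themselves privileges the float reading over the integer reading; the disambiguation happens entirely in the head of whoever picks the format string.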

Another way of putting it is that, if consciousness is “how the algorithm feels from the inside,” a simulated consciousness is just not following the same algorithm.

But what about the Fading Qualia argument?

The fading qualia argument is another thought experiment, this one by David Chalmers.

Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don’t worry, it still outputs the same electrical signals along the axons; your behaviour won’t be affected.

Then we do this for a second neuron.

Then a third, then a kth… until your brain contains only artificial neurons (N of them, where N ≈ 10^11).
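The structure of the replacement scenario can be sketched in a few lines of code. This is my own toy model, not Chalmers’: each “neuron” is just a function from inputs to an output, and at every step we swap one biological neuron for a functionally identical silicon one, checking that the network’s behaviour never changes.

```python
# Toy model (my sketch) of Chalmers' gradual-replacement scenario.
# Behavioural equivalence is preserved at every step k by construction.

def biological_neuron(inputs):
    return sum(inputs) > 1.0   # fires iff total input exceeds threshold

def silicon_neuron(inputs):
    return sum(inputs) > 1.0   # same input-output function, different substrate

def network_output(neurons, stimulus):
    # A trivially simple "network": feed the same stimulus to every neuron.
    return [n(stimulus) for n in neurons]

N = 10                          # stand-in for the brain's ~10^11 neurons
brain = [biological_neuron] * N
stimulus = [0.6, 0.7]
baseline = network_output(brain, stimulus)

for k in range(N):              # replace neuron k at step k
    brain[k] = silicon_neuron
    # Behaviour is identical after each single replacement.
    assert network_output(brain, stimulus) == baseline

print(brain.count(silicon_neuron))
```

The code makes vivid what the thought experiment stipulates: behaviour is invariant at every intermediate stage, so any change in conscious experience would have to happen without any behavioural trace.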

Now, what happens to your conscious experience in this process? A few possibilities arise:

  1. Conscious experience is initially the same, then shuts off completely at some discrete number of replaced neurons: maybe 1, maybe N/2. Rejected by virtue of being ridiculously implausible.

  2. Conscious experience fades continuously as k → N. Certainly more plausible than option 1, but still very strange. What does “fading” consciousness mean? Half a visual field? A full visual field with less perceived light intensity? Having been prone to (anemia-induced) loss of consciousness as a child, I can almost convince myself that fading qualia make some sort of sense, but not really...

  3. Conscious experience is unaffected by the transition.

Unlike (apparently) Chalmers, I do think that “fading qualia” might mean something, but I’m far from sure. Option 3 does seem like a better bet. But what’s the difference between a brain full of individual silicon neurons, and a brain simulated on general-purpose silicon chips?
I think the salient difference is that, in a biological brain and an artificial-neuron brain, the patterns of energy and matter flow are similar. Picture an impulse propagating along an axon: that process is physically very similar in the two types of physical brain.

When we simulate a brain on a general-purpose computer, however, there is no physically similar pattern of energy/matter flow. If I had to guess, I suspect this is the rub: you need a certain physical pattern of energy flow to get consciousness.

More thought is needed to clarify the exact difference between saying “consciousness arises from patterns of energy flow in the brain” and “consciousness arises from patterns of graphite on paper.” I think there is definitely a big difference, but it’s not crystal clear to me exactly what it consists in.