The Cacophony Hypothesis: Simulation (If It Is Possible At All) Cannot Call New Consciousnesses Into Existence

Epistemic Status: The following seems plausible to me, but it's complex enough that I might have made some mistakes. Moreover, it goes against the beliefs of many people much smarter than myself. Thus caution is advised, and commentary is appreciated.

I.

In this post, I aim to make a philosophical argument that we (or anyone) cannot use simulation to create new consciousnesses (or, for that matter, to copy existing people's consciousnesses so as to give them simulated pleasure or pain). I here make a distinction between "something that acts like it is conscious" (e.g. what is commonly known as a 'p-zombie') and "something that experiences qualia." Only the latter is relevant to what I mean when I say something is 'conscious' throughout this post. In other words, consciousness here refers to the quality of 'having the lights on inside', and as a result it relates as well to whether or not an entity is a moral patient (i.e. can it feel pain? Can it feel pleasure? If so, it is important that we treat it right).

If my argument holds, then this would be a so-called 'crucial consideration' for those who are concerned about simulation. It would mean that no one can threaten to hurt us in some simulation, nor promise to reward us in such a virtual space. However, we ourselves might still exist in some higher world's simulation (in a manner similar to what is described in SlateStarCodex's 'The View from the Ground Level'). Finally, since one consequence of my conclusion is that there is no moral downside to simulating beings that suffer, one might prefer to level a Pascal's Wager-like argument against me: under conditions of empirical and moral uncertainty, the moral consequences of accepting this argument (i.e. treating simulated minds as not capable of suffering) would be extreme, whereas granting simulated minds too much respect has fewer downsides.


Without further ado...


II.

Let us first distinguish two possible worlds. In the first, simulating consciousnesses [in any non-natural state] is simply impossible. That is to say, the only level on which consciousnesses may exist is the real, physical level that we see around us. No other realms may be said to 'exist'; all other spaces are mere information—they are fiction, not real. Nature may have the power to create consciousnesses, but not us: no matter how hard we try, we are forever unable to instantiate artificial consciousnesses. If this is the world we live in, then the Cacophony Hypothesis is vacuously true.


So let us say that we live in the second type of world: one where consciousnesses may exist not merely in what is directly physical, but may be instantiated also in the realm of information. Ones and zeroes by themselves are just numbers, but if you represent them with transistors and interpret them with the right rules, then you will find that they define code, programs, models, simulations—until, finally, the level of detail (or complexity, or whatever is required) is so high that consciousnesses are being simulated.


In this world, what is the right substrate (or input) on which this simulation may take place? And what are the rules by which it may be calculated?

Some hold that the substrate is mechanical: ones and zeroes, embedded on copper, lead, silicon, and gold. But the Church-Turing thesis (together with the universality of computation) tells us that all sufficiently powerful computers are computationally equivalent. What may be simulated with ones and zeroes may be simulated as well by combinations of colours, or gestures, or anything that has some manner of informational content. The effects—that is, the computations that are performed—would remain the same. The substrate may be paint, or people, or truly anything in the world, so long as it is interpreted in the right way. (See also Max Tegmark's explanation of this idea, which he calls Substrate-Independence.)
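To make the substrate-independence point concrete, here is a minimal sketch (in Python; the toy computation and the colour names are my own illustrative choices, not anything from the post): one and the same computation, run first on literal bits and then on "colours", with only the interpretation map changing.

```python
# A minimal sketch of substrate-independence: the same computation
# (here, binary increment) run on two different "substrates".

def increment(bits: str) -> str:
    """The computation itself, defined over abstract ones and zeroes."""
    n = int(bits, 2) + 1
    return format(n, "b").zfill(len(bits))

# Substrate 1: literal ones and zeroes.
print(increment("0111"))  # -> "1000"

# Substrate 2: colours. Only the interpretation map changes;
# the computation is untouched.
to_bit = {"red": "0", "blue": "1"}
from_bit = {"0": "red", "1": "blue"}

def increment_colours(colours: list[str]) -> list[str]:
    bits = "".join(to_bit[c] for c in colours)
    return [from_bit[b] for b in increment(bits)]

print(increment_colours(["red", "blue", "blue", "blue"]))
# -> ['blue', 'red', 'red', 'red']  (the same step, painted differently)
```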

And whatever makes a simulation run—the functions that take such inputs, and turn them into alternate simulated realities where consciousnesses may reside—who says that the only way this could happen is by interpreting a string of bits in the exact way that a computer would interpret it? How small is the chance that, out of infinitely many possible functions, the only function that actually works is exactly the one we've arbitrarily chosen to apply to computers, and which we commonly accept as having the potential for success?


III.

There are innumerably many interpretations of a changing string of ones and zeroes, of red and blue, of gasps and sighs. Computers have one consistent ruleset which tells them how to interpret bits; we may call this ruleset 'R'. However, surely we might have chosen many other rulesets. Simple ones, like "11 means 1 and 00 means 0, and interpret the result of this with R", are (by the Church-Turing thesis) equally powerful insofar as their ability to eventually create consciousnesses goes. Slightly more complex ones, such as "0 means 101 and 1 means 011, and interpret the result of this with R", may also be consistent, provided that we unpack the input in this manner. And we need not limit ourselves to rulesets that make use of R: any consistent ruleset, no matter how complex, may apply. What about the rule, "1 simulates the entirety of Alice, who is now a real simulated person"? Is this a valid function? Is there any point at which increasing the complexity of an interpretation rule, given some input, makes it lose the power to simulate? Or may anything that a vast computer network can simulate be encoded into a single bit and unpacked from this, provided that we read it with the right interpretation function? Yes, of course that is the case: all complexity that may be contained in some input data 'X' may instead be off-loaded into a function which says, "Given any bit of information, I return that data 'X'."
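A small sketch of the rulesets named above (Python again; the function standing in for R is hypothetical, since the real R would be an entire computer, but the compositional point survives the simplification):

```python
# Hypothetical stand-in for R: "interpretation" here just decodes a
# bit string into an integer. The real R would be a whole computer,
# but composition with R works the same way.
def R(bits: str) -> int:
    return int(bits, 2)

# Ruleset 1: "11 means 1 and 00 means 0, and interpret the result with R".
# (Inputs must be valid encodings, i.e. sequences of "11" and "00" pairs.)
def ruleset_pairs(bits: str) -> int:
    decoded = "".join({"11": "1", "00": "0"}[bits[i:i + 2]]
                      for i in range(0, len(bits), 2))
    return R(decoded)

# Ruleset 2: "0 means 101 and 1 means 011, and interpret the result with R".
def ruleset_expand(bits: str) -> int:
    decoded = "".join({"0": "101", "1": "011"}[b] for b in bits)
    return R(decoded)

# Ruleset 3: all complexity off-loaded into the function itself.
# Whatever the input, it returns the fixed data X.
X = "110101110"
def ruleset_constant(_any_input: str) -> int:
    return R(X)

print(ruleset_pairs("1100"))   # decodes to "10", so R gives 2
print(ruleset_expand("10"))    # decodes to "011101", so R gives 29
print(ruleset_constant("1"))   # R(X) = 430, regardless of input
```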


We are thus led to an inexorable conclusion:

  1. Every possible combination of absolutely anything that exists is valid input.

  2. Any set of functions is a valid set of functions—and the mathematical information space of all possible sets of functions is vast indeed.

  3. As such, an infinite number of simulations of all kinds are happening constantly, all around us. After all, if one function (R) can take one type of input (ones and zeroes, encoded on transistors) and return a simulation-reality, then who is to say that there do not exist, for every input, infinitely many functions that operate on it to this same effect?

Under this view, the world is a cacophony of simulations, of realities all existing in information space, invisible to our eyes until we access them through the right interpretation functions.

IV.

This leads us to the next question: what does it mean, now, for someone to run a simulation?


In Borges' short story, "The Library of Babel," there exists a library containing every book that could ever be: it is a physical representation of the vast information space that is all combinations of letters, punctuation marks, and special characters. It is then nonsensical to say that a writer creates a book: the book has always existed, and the writer merely gives us a reference to some location within this library at which the book may be found.


In the same way, all simulations already exist. Simulations are, after all, just certain configurations of information, interpreted in certain informational ways—and all information already exists, in the same realm that e.g. numbers (which are themselves information) inhabit. One does not create a simulation; one merely gives a reference to some simulation in information space. The idea of creating a new simulation is as nonsensical as the idea of creating a new book, or a new number; all these structures of information already exist; you cannot create them, only reference them.
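The reference-versus-creation point can be made literal with a toy sketch (mine, not the post's; it assumes a fixed alphabet and fixed book length): every possible book already sits at some index in the enumeration of all strings, and "writing" one merely computes its pre-existing address.

```python
# Toy Library of Babel: books of a fixed length over a fixed alphabet,
# enumerated by base-len(ALPHABET) counting.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."

def book_at(index: int, length: int) -> str:
    """Return the book shelved at a given index in the enumeration."""
    chars = []
    for _ in range(length):
        index, r = divmod(index, len(ALPHABET))
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

def index_of(book: str) -> int:
    """'Writing' a book: merely computing its pre-existing address."""
    index = 0
    for ch in book:
        index = index * len(ALPHABET) + ALPHABET.index(ch)
    return index

addr = index_of("the cat sat.")
print(addr)                                 # the book's address
print(book_at(addr, len("the cat sat.")))   # -> "the cat sat."
```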


But could not consciousnesses, like books, be copied? Here we run into the classical problem of whether there can exist multiple instances of a single informational object. If there cannot, and all copies of a consciousness are merely pointers to a single 'real' consciousness, in the same way that all copies of a book may be understood to be pointers to a single 'real' book, then this is not a problem. We would then end up with the conclusion that any kind of simulation is powerless: whether you simulate some consciousness or not, it (and indeed everything!) is already being simulated.


So suppose instead that multiple real, valid copies of a consciousness may exist. That is to say: the difference between there being one copy of Bob and there being ten copies of Bob is that in the latter situation, there exists more pain and joy—namely, that which the simulated Bobs are feeling—than there is in the former situation. Could we then not still conclude that running simulations creates consciousnesses, and thus that the act of running a simulation is one that has moral weight?


To refute this, consider a thought experiment. Suppose that a malicious AI shows you that it is running a simulation of you, and threatens to hurt sim!you if you don't do X. What power does it now have over you? What differences are there between the situation where it hurts sim!you, and the one where it rewards sim!you?

The AI is using one stream of data and interpreting it in one way (probably with ruleset R); this combination of input and processing rules results in a simulation of 'you'. In particular, because it has access to both the input and the interpretation function, it can view the simulation and show it to you. But on that same input there acts, invisibly to us, another set of rules (singled out here from the infinitely many sets of rules that are all simultaneously acting on this input), which results in a slightly different simulation of you. This second set of rules differs in such a way that if the AI hurts sim!you (an act which, one should note, changes the input; ruleset R remains the same), then in the second simulation, based on this input, you are rewarded, and vice versa. Now there are two simulations ongoing, both real and inhabited by a simulated version of you, both running on a single set of transistors. The AI cannot change the fact that in one of these two simulations you are hurt, and in the other you are not; it can only change which one it chooses to show you.


Indeed: for every function which simulates, on some input, a consciousness that is suffering, there is another function which, on this same input, simulates that same consciousness experiencing pleasure. Or, more generally and more formally stated: whenever the AI decides to simulate X, then for any other possible consciousness or situation Y that is not X, there exists a function which takes the input "the AI is simulating X" and which subsequently simulates Y. (Incidentally, the function which takes this same input and then returns a simulation of X is exactly that function that we usually understand to be 'simulation', namely R. However, as noted, R is just one out of infinitely many functions.)
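This pairing can be exhibited directly with a toy construction (again mine; the "experiences" are stand-in labels, and the interpretation function is arbitrary): given any function that reads an input as a simulated experience, composing it with a valence-flipping map yields a second, equally consistent interpretation of the very same input.

```python
from typing import Callable

# Toy "simulated experience": just a labelled valence.
Experience = str  # e.g. "Bob suffers" or "Bob rejoices"

# One interpretation function (the analogue of R): read the input
# bits as an experience. This particular rule is arbitrary.
def interpret_R(bits: str) -> Experience:
    return "Bob suffers" if bits.count("1") % 2 else "Bob rejoices"

def flipped(f: Callable[[str], Experience]) -> Callable[[str], Experience]:
    """Given any interpretation f, return the ruleset that reads the
    same input but with the valence inverted."""
    swap = {"Bob suffers": "Bob rejoices", "Bob rejoices": "Bob suffers"}
    return lambda bits: swap[f(bits)]

interpret_R2 = flipped(interpret_R)

data = "1011"              # one and the same physical input
print(interpret_R(data))   # -> "Bob suffers"
print(interpret_R2(data))  # -> "Bob rejoices"
```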


V.

As such, in this second world, reality is currently running uncountable billions of copies of any simulation that one may come up with, and any attempt to add one simulation-copy to reality results instead in a new reality-state in which every simulation-copy has been added. Do not fret, you are not culpable: after all, any attempt to do anything other than adding a simulation-copy also results in this same new reality-state. This is because any possible input, when given to the set of all possible rules or functions, yields every possible result; thus it does not matter what input you give to reality, whether that is running simulation X, or running simulation Y, or even doing act Z, or not doing act Z.


Information space is infinite. Even if we limit our physical substrate to transistors set to ones or zeroes, we may still come up with limitless functions besides R that together achieve the above result. In running computations, we don't change what is being simulated; we don't change what 'exists'. We merely open a window onto some piece of information. In mathematical space, everything already exists. We are not actors, but observers: we do not create numbers, or functions, or even applications of functions to numbers; we merely calculate, and view the results.


To summarize:

  1. If simulation is possible on some substrate with some rule, then it is possible on any substrate with any rule. Moreover, simulation space, like Borges' Library and number space, exists as much as it is ever going to exist; all possible simulations are already extant and running.

  2. Attempting to run 'extra' simulations on top of what reality is already simulating is useless, because your act of simulating X is interpreted by reality as input on which it simulates X and everything else, and your act of not simulating X is also interpreted by reality as input on which it simulates X and everything else.

It should be noted that simulations are still useful, in the same way that doing any kind of maths is useful: amidst the infinite expanses of possible outputs, mathematical processes highlight those outputs which you are interested in. There are infinitely many numbers, but the right function with the right input can still give you concrete information. In the same way, if someone is simulating your mind, then even though they cannot cause any pain or reward that would not already 'exist' anyway, they can nonetheless read your mind, and from this gain much information about you.


Thus simulation is still a very powerful tool.


But the idea that simulation can be used to conjure new consciousnesses into existence seems to me to be based on a fundamental misunderstanding of what information is.


[A note of clarification: One might argue that my argument does not successfully make the jump from physically-defined inputs, such as a set of transistors representing ones and zeroes, to symbolically-defined meta-physical inputs, such as "whether or not X is being simulated." This would be a pertinent objection, since my line of reasoning depends crucially on this second type of input. To this hypothetical argument, I would counter that any such symbolic input has to exist fully in natural, physical reality in some manner: "X is being simulated" is a statement about the world which we might, given the tools (and knowing for each function what input to search for—this is technically computable), physically check to be true or false, in the same way that one may physically check whether a certain set of transistors currently encodes some given string of bits. The second input is far more abstract, and more complex to check, than the first; but I do not think they exist on qualitatively different levels. Finally, one would not need infinite time to check the statement "X is being simulated"; just pick the function "Given the clap of one's hands, simulate X", and then clap your hands.]


VI.

Four final notes, to recap and conclude:

  1. My argument in plain English, without rigour or reason, is this: if having the right numbers in the right places is enough to make new people exist (proposition A), then anything is enough to make anything exist (B). It follows that if we accept A, which many thinkers do, then everything—every possible situation—currently exists. It is moreover of no consequence to try and add a new situation to this 'set of all possible situations, each occurring infinitely many times', because your new situation is already in there an infinite number of times, and furthermore, abstaining from adding this new situation counts as 'anything' and thus, by B, would also add the new situation to this set.

  2. You cannot create a book or a number; you are merely providing a reference to some already extant book in Babel's Library, or to some extant number in number space. In the same way, running a simulation, the vital part of which (by the Church-Turing thesis) has to be entirely based on non-physical information, should no longer be seen as the act of creating some new reality; it merely opens a window onto a reality that was already there.

  3. The idea that every possible situation, including terrible, hurtful ones, is real may be very stressful. To people who are bothered by this, I offer the view that perhaps we do live on the ground level, and simulating artificial, non-natural consciousnesses may be impossible: our own world may well be all that there is. The Cacophony Hypothesis is not suited to establish that the idea of "reality is a cacophony of simulations" is necessarily true; rather, it was written to argue that if we accept that some kind of simulation is possible, then it would be strange to also deny that every other kind of simulation is possible.

  4. A secondary aim is to re-center the discussion around simulation: to go from a default idea of "computation is the only method through which simulation may take place" to the new idea, which is "simulations may take place everywhere, in every way." The first view seems too neat, too well-suited to an accidental reality, strangely and unreasonably specific; we are en route to discovering one type of simulation ourselves, and thus it was declared that this was the only type, the only way. The second view (though my bias should be noted!) strikes me as being general and consistent; it is not formed specifically around the 'normal', computer-influenced ideas of what forms computation takes, but rather allows for all possible forms of computation to have a role in this discussion. I may well be wrong, but it seems to me that the burden of proof should not be on those who say that "X may simulate Y"; it should be on those who say that "Y may only be simulated by Z." The default understanding should be that inputs and functions are valid until somehow proven invalid, rather than the other way around. (Truthfully, to gain a proof either way is probably impossible, unless we were to somehow find a method to measure consciousness—and this would have to be a method that recognizes p-zombies for what they are.)

Thanks to Matthijs Maas for helping me flesh out this idea through engaging conversations and thorough feedback.