The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence
Epistemic Status: The following seems plausible to me, but it’s complex enough that I might have made some mistakes. Moreover, it goes against the beliefs of many people much smarter than myself. Thus caution is advised, and commentary is appreciated.
In this post, I aim to make a philosophical argument that we (or anyone) cannot use simulation to create new consciousnesses (or, for that matter, to copy existing people’s consciousnesses so as to give them simulated pleasure or pain). I here make a distinction between “something that acts like it is conscious” (e.g. what is commonly known as a ‘p-zombie’) and “something that experiences qualia.” Only the latter is relevant to what I mean when I say something is ‘conscious’ throughout this post. In other words, consciousness here refers to the quality of ‘having the lights on inside’, and as a result it relates as well to whether or not an entity is a moral patient (i.e. can it feel pain? Can it feel pleasure? If so, it is important that we treat it right).
If my argument holds, then this would be a so-called ‘crucial consideration’ for those who are concerned about simulation. It would mean that no one can threaten to hurt us in some simulation, nor promise to reward us in such a virtual space. However, we ourselves might still exist in some higher world’s simulation (in a manner similar to what is described in SlateStarCodex’s ‘The View from the Ground Level’). Finally, since one consequence of my conclusion is that there is no moral downside to simulating beings that suffer, one might level a Pascal’s Wager-like argument against me: under conditions of empirical and moral uncertainty, the moral consequences of accepting this argument (i.e. treating simulated minds as incapable of suffering) would be extreme, whereas granting simulated minds too much respect has fewer downsides.
Without further ado...
Let us first distinguish two possible worlds. In the first, simulating consciousnesses [in any non-natural state] is simply impossible. That is to say, the only level on which consciousnesses may exist is the real, physical level that we see around us. No other realms may be said to ‘exist’; all other spaces are mere information—they are fiction, not real. Nature may have the power to create consciousnesses, but not us: no matter how hard we try, we are forever unable to instantiate artificial consciousnesses. If this is the world we live in, then the Cacophony Hypothesis holds trivially.
So let us say that we live in the second type of world: One where consciousnesses may exist not merely in what is directly physical, but may be instantiated also in the realm of information. Ones and zeroes by themselves are just numbers, but if you represent them with transistors and interpret them with the right rules, then you will find that they define code, programs, models, simulations—until, finally, the level of detail (or complexity, or whatever is required) is so high that consciousnesses are being simulated.
In this world, what is the right substrate (or input) on which this simulation may take place? And what are the rules by which it may be calculated?
Some hold that the substrate is mechanical: ones and zeroes, embedded on copper, lead, silicon, and gold. But the Church-Turing thesis tells us that all sufficiently powerful computers can compute the same things. What may be simulated on ones and zeroes may be simulated as well by combinations of colours, or gestures, or anything that has some manner of informational content. The effects—that is, the computations that are performed—would remain the same. The substrate may be paint, or people, or truly anything in the world, so long as it is interpreted in the right way. (See also Max Tegmark’s explanation of this idea, which he calls Substrate-Independence.)
And whatever makes a simulation run—the functions that take such inputs, and turn them into alternate simulated realities where consciousnesses may reside—who says that the only way this could happen is by interpreting a string of bits in the exact way that a computer would interpret it? How small is the chance that out of all infinite possible functions, the only function that actually works is exactly that function that we’ve arbitrarily chosen to apply to computers, and which we commonly accept as having the potential for success?
There are innumerably many interpretations of a changing string of ones and zeroes, of red and blue, of gasps and sighs. Computers have one consistent ruleset which tells them how to interpret bits; we may call this ruleset ‘R’. However, surely we might have chosen many other rulesets. Simple ones, like “11 means 1 and 00 means 0, and interpret the result of this with R,” are (by the Church-Turing thesis) equally powerful insofar as their ability to eventually create consciousnesses goes. Slightly more complex ones, such as “0 means 101 and 1 means 011, and interpret the result of this with R,” may also be consistent, provided that we unpack the input in this manner. And we need not limit ourselves to rulesets that make use of R: any consistent ruleset, no matter how complex, may apply. What about the rule, “1 simulates the entirety of Alice, who is now a real simulated person”? Is this a valid function? Is there any point at which increasing the complexity of an interpretation rule, given some input, makes it lose the power to simulate? Or may anything that a vast computer network can simulate be encoded into a single bit and unpacked from it, provided that we read it with the right interpretation function? Yes, of course that is the case: all complexity that may be contained in some input data ‘X’ may instead be off-loaded into a function which says, “Given any bit of information, I return that data ‘X’.”
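The two moves above (composing a trivial re-encoding with R, and off-loading all complexity into the function itself) can be sketched in a few lines of Python. This is only an illustration under loose assumptions: `R` here is a toy stand-in (it decodes 8-bit ASCII), and all function names are mine, not the post's.

```python
# Sketch of the "rulesets" idea: a trivially re-encoded input is exactly
# as interpretable as the original, and any amount of complexity in the
# input can instead be moved into the interpreting function.
# All names here are illustrative stand-ins, not anything canonical.

def R(bits: str) -> str:
    """Toy stand-in for the 'standard' ruleset R: decode 8-bit ASCII."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

def rule_doubled(bits: str) -> str:
    """'11 means 1 and 00 means 0, then interpret the result with R.'
    Since each pair is two identical bits, taking the first of each
    pair recovers the original string."""
    decoded = "".join(bits[i] for i in range(0, len(bits), 2))
    return R(decoded)

def rule_constant(_anything) -> str:
    """All complexity off-loaded into the function: given any input
    whatsoever, return the fixed data X."""
    X = "0100100001101001"  # the string "Hi" in 8-bit ASCII
    return R(X)

msg = "0100100001101001"               # "Hi" under R
doubled = "".join(b * 2 for b in msg)  # the same message, re-encoded

# Three different rulesets, three different substrates-of-encoding,
# one and the same interpreted content:
assert R(msg) == rule_doubled(doubled) == rule_constant("1") == "Hi"
```

The point of `rule_constant` is the one made in the paragraph above: nothing stops an interpretation function from carrying arbitrarily much of the "work" itself, so that even a single bit suffices as input.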
We are thus led to an inexorable conclusion:
Every possible combination of absolutely anything that exists, is valid input.
Any set of functions is a valid set of functions—and the mathematical information space of all possible sets of functions, is vast indeed.
As such, an infinite number of simulations of all kinds are happening constantly, all around us. After all, if one function (R) can take one type of input (ones and zeroes, encoded on transistors) and return a simulation-reality, then who is to say that there do not exist, for every input, infinitely many functions that operate on it to this same effect?
Under this view, the world is a cacophony of simulations, of realities all existing in information space, invisible to our eyes until we access them through functional interpretation methods.
This leads us to the next question: What does it mean for someone to run a simulation, now?
In Borges’ short story, “The Library of Babel,” there exists a library containing every book that could ever be: It is a physical representation of the vast information space that is all combinations of letters, punctuation marks, and special characters. It is now nonsensical to say that a writer creates a book: The book has always existed, and the writer merely gives us a reference to some location within this library at which the book may be found.
In the same way, all simulations already exist. Simulations are after all just certain configurations of information, interpreted in certain informational ways—and all information already exists, in the same realm that e.g. numbers (which are themselves information) inhabit. One does not create a simulation; one merely gives a reference to some simulation in information space. The idea of creating a new simulation is as nonsensical as the idea of creating a new book, or a new number; all these structures of information already exist; you cannot create them, only reference them.
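The Library of Babel picture — a book is not created but merely pointed at — can be made concrete: enumerate every finite string over an alphabet, so that each "book" simply *is* a position in that enumeration. The sketch below uses bijective base-k numbering; the alphabet and all names are my own illustrative choices.

```python
# Illustration of "books are references, not creations": put all finite
# strings over an alphabet into one fixed enumeration (shortest first,
# then lexicographic). 'Writing' a book then amounts to naming its index;
# the book was always sitting at that position.

ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."

def index_of(book: str) -> int:
    """Map a string to its position in the enumeration (bijective
    base-k numbering, so every string has exactly one index)."""
    n = 0
    for ch in book:
        n = n * len(ALPHABET) + ALPHABET.index(ch) + 1
    return n

def book_at(n: int) -> str:
    """Inverse mapping: recover the string at position n."""
    chars = []
    while n > 0:
        n, r = divmod(n - 1, len(ALPHABET))
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

# The round trip is exact in both directions: index and book are
# just two names for the same informational object.
assert book_at(index_of("call me ishmael.")) == "call me ishmael."
assert index_of(book_at(123456789)) == 123456789
```

Nothing in the code brings a book into being; `index_of` only computes where in the (already fully determined) enumeration a given string sits, which is the sense of "reference, not creation" used above.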
But could not consciousnesses, like books, be copied? Here we run into the classical problem of whether there can exist multiple instances of a single informational object. If there may not be, and all copies of a consciousness are merely pointers to a single ‘real’ consciousness, in the same way that all copies of a book may be understood to be pointers to a single ‘real’ book, then this is not a problem. We then would end up with the conclusion that any kind of simulation is powerless: Whether you simulate some consciousness or not, it (and indeed everything!) is already being simulated.
So suppose instead that multiple real, valid copies of a consciousness may exist. That is to say: the difference between there being one copy of Bob, and there being ten copies of Bob, is that in the latter situation, there exists more pain and joy—namely that which the simulated Bobs are feeling—than there is in the former situation. Could we then not still conclude that running simulations creates consciousnesses, and thus the act of running a simulation is one that has moral weight?
To refute this, a thought experiment. Suppose that a malicious AI shows you that it is running a simulation of you, and threatens to hurt sim!you if you don’t do X. What power does it now have over you? What differences are there between the situation where it hurts sim!you, and the one where it rewards sim!you?
The AI is using one stream of data and interpreting it in one way (probably with ruleset R); this combination of input and processing rules results in a simulation of ‘you’. In particular, because it has access to both the input and the interpretation function, it can view the simulation and show it to you. But on that same input there acts, invisibly to us, another set of rules (specified here out of infinitely many sets of rules, all of which are simultaneously acting on this input), which results in a slightly different simulation of you. This second set of rules differs in such a way that if the AI hurts sim!you (an act which, one should note, changes the input; ruleset R remains the same), then in the second simulation, based on this same input, you are rewarded, and vice versa. Now there are two simulations ongoing, both real and inhabited by a simulated version of you, both running on a single set of transistors. The AI cannot change the fact that in one of these two simulations you are hurt and in the other you are not; it can only change which one it chooses to show you.
Indeed: For every function which simulates, on some input, a consciousness that is suffering, there is another function which, on this same input, simulates that same consciousness experiencing pleasure. Or, more generally and more formally stated: Whenever the AI decides to simulate X, then for any other possible consciousness or situation Y that is not X, there exists a function which takes the input of “The AI is simulating X”, and which subsequently simulates Y. (Incidentally, the function which takes this same input, and which then returns a simulation of X, is exactly that function that we usually understand to be ‘simulation’, namely R. However, as noted, R is just one out of infinitely many functions.)
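The pairing claimed here — for every "hurting" reading of the data there is a mirrored "rewarding" reading of the very same data — can be shown in miniature. The rulesets below are toy stand-ins of my own invention; the point is only that the AI controls the data, not the set of readings of it.

```python
# Sketch of the thought experiment: one and the same input, read under
# two equally consistent rulesets, yields opposite outcomes. Changing
# the input swaps which ruleset shows which outcome, but the *pair* of
# outcomes is invariant. Names are illustrative, not from the post.

def ruleset_R(bit: str) -> str:
    """The 'standard' reading: 1 means sim!you is hurt, 0 rewarded."""
    return "hurt" if bit == "1" else "rewarded"

def ruleset_R_prime(bit: str) -> str:
    """A mirrored reading of the very same bit."""
    return "rewarded" if bit == "1" else "hurt"

state = "1"  # the AI sets the transistors so that, under R, sim!you is hurt
assert ruleset_R(state) == "hurt"
assert ruleset_R_prime(state) == "rewarded"

# Whatever the AI writes to the transistors, both outcomes are being
# 'run' on that data; it can only choose which reading to display.
for state in ("0", "1"):
    assert {ruleset_R(state), ruleset_R_prime(state)} == {"hurt", "rewarded"}
```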
As such, in this second world, reality is currently running uncountable billions of copies of any simulation that one may come up with, and any attempt to add one simulation-copy to reality, results instead in a new reality-state in which every simulation-copy has been added. Do not fret, you are not culpable: after all, any attempt to do anything other than adding a simulation-copy, also results in this same new reality-state. This is because any possible input, when given to the set of all possible rules or functions, yields every possible result; thus it does not matter what input you give to reality, whether that is running simulation X, or running simulation Y, or even doing act Z, or not doing act Z.
Informational space is infinite. Even if we limit our physical substrate to transistors set to ones or zeroes, we may still come up with limitless functions besides R, that together achieve this above result. In running computations, we don’t change what is being simulated, we don’t change what ‘exists’. We merely open a window onto some piece of information. In mathematical space, everything already exists. We are not actors, but observers: We do not create numbers, or functions, or even applications of functions on numbers; we merely calculate, and view the results.
If simulation is possible on some substrate with some rule, then it is possible on any substrate with any rule. Moreover, simulation space, like Borges’ Library and number space, exists as much as it is ever going to exist; all possible simulations are already extant and running.
Attempting to run ‘extra’ simulations on top of what reality is already simulating, is useless, because your act of simulating X is interpreted by reality as input on which it simulates X and everything else, and your act of not simulating X, is also interpreted by reality as input on which it simulates X and everything else.
It should be noted that simulations are still useful, in the same way that doing any kind of maths is useful: Amidst the infinite expanses of possible outputs, mathematical processes highlight those outputs which you are interested in. There are infinitely many numbers, but the right function with the right input can still give you concrete information. In the same way, if someone is simulating your mind, then even though they cannot cause any pain or reward that would not already ‘exist’ anyway, they can now nonetheless read your mind, and from this gain much information about you.
Thus simulation is still a very powerful tool.
But the idea that simulation can be used to conjure new consciousnesses into existence, seems to me to be based on a fundamental misunderstanding of what information is.
[A note of clarification: One might argue that my argument does not successfully make the jump from physically-defined inputs, such as a set of transistors representing ones and zeroes, to symbolically-defined meta-physical inputs, such as “whether or not X is being simulated.” This would be a pertinent argument, since my line of reasoning depends crucially on this second type of input. To this hypothetical argument, I would counter that any such symbolic input has to exist fully in natural, physical reality in some manner: “X is being simulated” is a statement about the world which we might, given the tools (and knowing for each function what input to search for—this is technically computable), physically check to be true or false, in the same way that one may physically check whether a certain set of transistors currently encodes some given string of bits. The second input is far more abstract, and more complex to check, than the first; but I do not think they exist on qualitatively different levels. Finally, one would not need infinite time to check the statement “X is being simulated”; just pick the function “Given the clap of one’s hands, simulate X”, and then clap your hands.]
Four final notes, to recap and conclude:
My argument in plain English, without rigour or reason, is this: if having the right numbers in the right places is enough to make new people exist (proposition A), then anything is enough to make anything exist (B). It follows that if we accept A, which many thinkers do, then everything—every possible situation—currently exists. It is moreover of no consequence to try to add a new situation to this ‘set of all possible situations, infinite times’, because your new situation is already in there infinitely many times; furthermore, abstaining from adding this new situation counts as ‘anything’ and thus, by B, would also add the new situation to this set.
You cannot create a book or a number; you’re merely providing a reference to some already extant book in Babel’s Library, or to some extant number in number space. In the same way, running a simulation, the vital part of which (by the Church-Turing thesis) has to be entirely based on non-physical information, should no longer be seen as the act of creating some new reality; it merely opens a window into a reality that was already there.
The idea that every possible situation, including terrible, hurtful ones, is real, may be very stressful. To people who are bothered by this, I offer the view that perhaps we do live in the ground level, and simulating artificial, non-natural consciousnesses may be impossible: our own world may well be all that there is. The Cacophony Hypothesis does not establish that “reality is a cacophony of simulations” is necessarily true; rather, it argues that if we accept that some kind of simulation is possible, then it would be strange to deny that every other kind of simulation is possible as well.
A secondary aim is to re-center the discussion around simulation: to go from the default idea of “Computation is the only method through which simulation may take place” to the new idea that “Simulations may take place everywhere, in every way.” The first view seems too neat, too well-suited to an accidental reality, strangely and unreasonably specific; we are en route to discovering one type of simulation ourselves, and thus it was declared that this was the only type, the only way. The second view (though my bias should be noted!) strikes me as general and consistent; it is not formed specifically around the ‘normal’, computer-influenced ideas of what forms computation takes, but rather allows for all possible forms of computation to have a role in this discussion.

I may well be wrong, but it seems to me that the burden of proof should not be on those who say “X may simulate Y”; it should be on those who say “X may only be simulated by Z.” The default understanding should be that inputs and functions are valid until somehow proven invalid, rather than the other way around. (Truthfully, a proof either way is probably impossible, unless we were to somehow find a method to measure consciousness—and this would have to be a method that recognizes p-zombies for what they are.)
Thanks go to Matthijs Maas for helping me flesh out this idea through engaging conversations and thorough feedback.