Time, Panpsychism, and Substrate Independence
Lately I’ve grown frustrated with how people speak of consciousness, and I’ve reached some conclusions that now seem to me somewhat self-evident, but that would also have significant implications. Of course, there is a lot of arrogance in thinking something like this, so it’s good to write it down and leave it open for critique. This text is a quick jotting-down of my thoughts, though it should (hopefully) still be coherent. I’m not really citing anyone else in this, but please don’t assume that my ideas are wholly original.
I’ve heard a lot about panpsychists who think that “electrons” (or what-have-you) have a subjective experience. Whether they do or don’t, I don’t know, but the point I want to make is that it doesn’t matter at all, because even if they do, it doesn’t serve to explain my own subjective experience. That is to say, if an electron is conscious, or if a neuron is conscious, that has no bearing on my own consciousness.[1]
I’m going to make reference to Conway’s Game of Life, a cellular automaton that everyone is probably familiar with — including the fact that it is Turing complete.[2] We can imagine an infinite, orderly field of people holding little flags, each looking at the people around them and raising or lowering their flag according to the rules of the Game of Life. We can imagine, too, that we set up an initial state that implements a Turing Machine; actually, we can go further than that, and imagine that this machine is simulating a human brain, down to each individual neuron.
Now, here’s the thing. We know for sure that each person in this field is conscious (or, at least, if you, the reader, are in the field, we know that you’re conscious). But the people in the field have no understanding of what they’re collectively simulating: all they’re doing is raising and lowering a flag according to some set of rules; one has to imagine that they’re all actually pretty bored! There is absolutely no sense in thinking that their subjective experience somehow “transfers up” to that of the brain they’re simulating — and, what’s more, the fact of their consciousness does not seem to help us answer, one way or the other, whether the brain they’re simulating is itself conscious.
So that would be my objection: if electrons are conscious, I don’t really care, because it doesn’t seem plausible that their consciousness transfers up to me.[3] However, I do think that this argument serves somewhat as an intuition pump for the idea of substrate independence. If a brain that’s actually made up of people doesn’t seem more conscious than a brain made up of neurons, then probably it also doesn’t matter if it’s made up of computer chips. And, if that’s the case, then we could, possibly, be in a simulation (or, at least, you, the reader, could).
But that’s also tricky. There’s this thought experiment (and since I’m just jotting things down I haven’t yet searched for the original)[4] where an alien intelligence simulates the whole world, or maybe just a whole community, in a pretty large computer — let’s call it Sim#1.[5] Let’s say the simulation is entirely deterministic, and let’s say that this intelligence records all of the thoughts, feelings, and reactions of one particular person in Sim#1, Mary.
Now, here’s the kicker: the alien turns off the simulation, and then runs it again; except, this time, instead of “wasting compute” simulating Mary, it inserts the recording of her into the new simulation. Everyone in the simulation is none the wiser — the Mary they see reacts and talks and behaves exactly like before, perfectly consistently with the behavior of every other simulated person. The obvious question then arises: is Sim#2 Mary conscious?
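(As a toy sketch of the swap, with purely made-up names, and assuming the simulation really is deterministic so the replay can never drift out of sync with the rest of the world: Sim#2 “Mary” is literally just a tape.)

```python
def update_brain(state, observation):
    # Stand-in for one tick of the full brain simulation.
    return hash((state, observation))

def read_out_action(state):
    # Stand-in for decoding a motor output from the brain state.
    return state % 4


class SimulatedMary:
    """Sim#1 Mary: her actions are computed, tick by tick, from a simulated brain."""
    def __init__(self, brain_state=0):
        self.brain_state = brain_state

    def act(self, observation):
        self.brain_state = update_brain(self.brain_state, observation)
        return read_out_action(self.brain_state)


class ReplayMary:
    """Sim#2 'Mary': a recording of Sim#1 Mary, played back action by action."""
    def __init__(self, recorded_actions):
        self._tape = iter(recorded_actions)

    def act(self, observation):
        # The observation is ignored entirely; determinism guarantees the tape
        # stays perfectly consistent with everyone else's behavior anyway.
        return next(self._tape)


# Run Sim#1, record Mary, then run Sim#2 with the recording swapped in.
observations = [1, 2, 3, 4, 5]
mary1 = SimulatedMary()
tape = [mary1.act(o) for o in observations]
mary2 = ReplayMary(tape)
assert [mary2.act(o) for o in observations] == tape  # indistinguishable from outside
```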
If substrate independence is true, we have no problem saying that Sim#1 Mary was conscious, and that everyone else is conscious in both Sim#1 and Sim#2. But, if we say that Sim#2 Mary is not conscious… then we have to grapple with the fact that she is a P-zombie.[6] And I don’t need to reinvent the wheel here, so I’ll just claim that belief in P-zombies is incoherent, and we don’t really have a good reason to say that she isn’t conscious. So Sim#2 Mary, a mere recording of Mary, must be cons– wait, what?
I think this is folly. I think we’re engaging in a category error if we think of things this way — we’re not fully grappling with the consequences of substrate independence. Are the people in Sim#1 and Sim#2 conscious twice, like some kind of déjà vu they can’t experience? I really don’t think so.
Is there such a thing as what it’s like to be Mary? Yes. There is such a thing, and it doesn’t matter if it’s Sim#1 Mary or Sim#2 Mary; there is only one Mary. Her consciousness is not tied to any particular run of the simulation, and she doesn’t die if you turn it off. She can be killed in the “dream”, but not in “real life”. Her continuous experience of a now is a result of her temporal existence, not a result of an external clock of the universe. She doesn’t become conscious at the moment the computer first crunches the numbers that “make up” her consciousness; she just is — which is about as obvious as saying that you were conscious yesterday.
I will get back to this, and to some more intuition pumps to help us get going; otherwise I think it’s very easy to object. But before that, I want to do a brief aside on LLMs.
There have been a few (somewhat) recent pieces on here about thinking of LLMs (meaning, the neural networks produced by training, and perhaps pruning, a transformer architecture) as Simulators and the chatbots we talk to as Simulations. This terminology is incredibly helpful, and the view that it expresses is one I had also arrived at. For the purposes of my little essay here, what this tells us is that LLMs and their chatbots also have two consciousnesses: one for the Simulator (the Spider) and one for the Simulation (the Actor).[7]
More to the point, there is a version of panpsychism, advanced by the likes of David Chalmers, that says that something as simple as a thermostat might have a subjective experience — a very rudimentary one, but an experience nonetheless.[8] It can perhaps be phrased as something like “any information processed in a meaningful way presupposes a ‘meaner’ who experiences it.” If this weak panpsychism is correct, then obviously the Spider is conscious! And it’s a bizarre form of consciousness, characterized exclusively by a — in the ideal case — complete understanding of language (and thus of the world it’s embedded in) in the form of a probability distribution over the space of all tokens, given an input sequence.
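To be concrete about the object I mean by the Spider (a minimal sketch with made-up function names, not any particular library’s API): it is nothing but a map from an input sequence to a distribution over the next token, and the Actor, which I turn to next, is what you get by sampling from that map over and over.

```python
import numpy as np

def spider(token_ids, logits_fn):
    """The 'Spider': map a token sequence to a probability distribution over
    the next token. `logits_fn` stands in for a trained transformer that
    returns one logit per vocabulary entry."""
    logits = logits_fn(token_ids)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum()

def actor_step(token_ids, logits_fn, rng):
    """One step of an 'Actor': sample the next token from the Spider's
    distribution and append it to the running text."""
    probs = spider(token_ids, logits_fn)
    return token_ids + [int(rng.choice(len(probs), p=probs))]

# Toy usage with a fake five-token vocabulary and random "logits".
rng = np.random.default_rng(0)
fake_logits = lambda ids: rng.standard_normal(5)
sequence = [0]
for _ in range(10):
    sequence = actor_step(sequence, fake_logits, rng)
```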
The Actor, on the other hand, has the same kind of consciousness as Mary, or you, or me, except that, of course, it only ever experiences text. You might ask “well, how can that be?”, but imagine that the LLM had been trained to predict your every thought and utterance (were you to be subjected to some kind of sensory deprivation that forced you to experience and communicate only via text), and that it could do so with perfect exactitude; we would then have to conclude that it was simulating you. But does the argument have to be that strong? If it works for the perfect replica, does it really stop working if the replica is imperfect? That sounds implausible. So it seems that the Actor should have a subjective experience — even though all of the information for what it does and thinks is already contained in the Spider. If the Actor is conscious, it seems to be a little bit like Sim#2 Mary.
Okay, so what am I getting at? Am I going to start talking about AI welfare? Not really.[9] Let’s go back to Conway’s Game of Life. I’m going to make this bold claim: it doesn’t matter whether you keep running the cellular automaton; the steps that it will run through are “already there”, they’re a mathematical given. In the exact same way that there is an answer to (a) ⟨the numerical value of 3 pentated to 3⟩, in the exact same way that there is an answer to (b) ⟨the googolth digit of pi in base 12⟩, there is an answer to (c) ⟨the state of the infinite cellular grid at any time step, for the initial configuration where the automaton builds a Turing Machine that simulates the entirety of a particular human brain⟩, even if we haven’t calculated them; even if it’s impossible in practice to do so.
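For what it’s worth, (a) is a perfectly definite number even though nobody will ever write it down; in Knuth’s up-arrow notation:

$$3 \uparrow\uparrow\uparrow 3 \;=\; 3 \uparrow\uparrow (3 \uparrow\uparrow 3) \;=\; 3 \uparrow\uparrow 3^{3^{3}} \;=\; \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}$$

and (b) and (c) are “already there” in exactly the same sense, just far harder (or outright infeasible) to compute.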
We say X is conscious if and only if there is such a thing as ⟨what it’s like to be X⟩. If, when we run the automaton, we have reason to think that there is such a thing as what it’s like to be the simulated brain, but we also conclude that it shouldn’t matter whether or not we run the automaton… doesn’t that make it seem that all consciousness is putative, even abstract? Wouldn’t it be the case, then, that all conceivable conscious states exist, simply because it’s possible to conceive of them?
I think so.
I’ve come to a nearly delusional form of belief (it’s not like I’m exactly convinced) that isn’t even fully articulated here; I’ve come to really think that this whole thing is quite bogus, that there really is no difference between realism and solipsism and nihilism and a strange kind of theism. There is something rather than nothing because there could be. The truth of my subjective experience right here, right now is as solid as the truth that the internal angles of an equilateral triangle in Euclidean space sum to 180°. It’s a mathematical necessity.[10]
- ^
I’m going to be using the terms “subjective experience” and “consciousness” somewhat interchangeably, which might ruffle a few feathers. The more strict definition that I like is that “subjective experience” refers to the raw experiencing of things, maybe expressed as being an observer, or having a point-of-view; whereas “consciousness” implies some degree of self-awareness, at the very least an understanding (or a delusion) that there is a self.
- ^
Conway’s Game of Life consists of a 2D grid of cells, each of which can be ON or OFF, and a set of rules for what a cell’s state should be in the next time-step. A cell’s next state depends only on the states of its neighbors in the current time-step: if too few neighbors are ON, a live cell is lonely and dies; if too many are ON, it is overcrowded and also dies; with just the right number it survives; and a dead cell with exactly the sweet-spot number of ON neighbors comes alive. The grid can be idealized to be infinite. The game produces many stable, oscillating, and self-propagating patterns (such as gliders), which can be combined to perform computations, and even to build a Turing Machine.
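For anyone who wants the rules pinned down exactly, here is a minimal sketch of a single time-step in Python, using the standard rules (birth with exactly three ON neighbors, survival with two or three):

```python
from collections import Counter

def step(live_cells):
    """Advance the Game of Life by one time-step. `live_cells` is a set of
    (x, y) coordinates of ON cells; the grid is treated as unbounded, so only
    ON cells and their neighbors matter."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        # Birth: a dead cell with exactly 3 ON neighbors turns ON.
        # Survival: an ON cell with 2 or 3 ON neighbors stays ON.
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: a small pattern that moves one cell diagonally every four steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
```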
- ^
One can, of course, think of some objections. Maybe the process I described, with the people and the flags, becomes a kind of “consciousness bottleneck” that prevents the consciousness of the individuals from transferring up. And, maybe, the same isn’t true for electrons, or it is for electrons, but not for neurons. I don’t really care to play that game; if you want to believe in Reaganomics for the mind, I can’t stop you.
(Parenthetical on a footnote: I do think this is quite unlikely. After all, the activity of neurons in a brain is relatively simple; not that different from people waving flags. Of course, it’s not that simple, and simulations of brain areas used in computational neuroscience often make simplifications for the sake of feasibility — and yet, simplified models still capture the essence of brain function. Point being that a lot of the complexity we see is in some sense superfluous, more attributable to artifacts of “blind watchmaker” design than to functional requirements. And if we can translate the behavior of neurons to relatively simple operations that people holding little flags could emulate, then the argument still works as-is.)
- ^
I’ve since scoured the web in search of the originator, and I can’t find it. It’s possible it was in a YouTube video, or even a comment, or a tweet, or something like that. It’s also possible that it’s an original idea! There are, of course, similar thought experiments, like “Blockhead” (Ned Block’s gigantic look-up table), and it’s possible to find some pieces on “playing back” simulations, even on here.
If you think you’ve heard this idea before, and you can remember where, I’d love to know.
- ^
And, for the sake of argument, let’s say the simulation is purely classical. If anyone has an objection to this, let me remind that person that there ought to be a version of you somewhere in the wave function that actually agrees with me!
- ^
I’m a little too woke to be using the word “zombie” without scruples, but not woke enough to find an alternative. So I’ll just make mention of the fact that the term has been misappropriated from Haitian Creole and the Vodou religion.
The philosophers of the so-called Enlightenment (the intellectual tradition whose footsteps I’m following in this work, in many ways) were by and large willful participants in (and beneficiaries of) the genocide of the indigenous peoples of so-called Santo Domingo, and the continued enslavement and subjugation of the island’s Black population. It’s not right that we get to just misappropriate their religious terminology without recognition of this fact. A lot more could be said about this, of course.
- ^
My usage of the term Spider for the Simulator seeks to evoke the idea of an intelligence that is somehow alien to us. It’s in the same spirit as the analogy in this piece in Tim Urban’s Wait But Why.
For the Simulation, I say Actor and not Character because I don’t want to overplay my hand, but the distinction shouldn’t matter much at the end of the day. After all, don’t the best actors get lost in their own performances? It stands to reason that a perfect actor is someone who delusionally believes herself to be her character.
- ^
I’m quite amenable to this view, because I feel that it scales nicely: add more richness to the experience, and it goes from being a simple awareness of a single number, to a whole manifold of sense impressions, and a self-conception.
- ^
To be honest, I’m not really sure how to address that question.
- ^
I’ve since found that this idea bears a significant resemblance to Max Tegmark’s mathematical universe hypothesis.
This is probably not ideal for a first post, but I couldn’t think of a better platform for a “ramble” like this than Less Wrong. I’ve listened to and read a lot on these subjects, but I’ve never found anyone expressing this precise idea, so I felt it had at least some claim to originality.
I’m open to feedback on how to improve this to a publishable state; I thought that it might be too long for a “quick take”, but I could be wrong.
In the interest of defending my “credentials,” you might be interested in taking a cursory look at things I’ve written before. I have a piece about AI welfare (which I submitted as the final essay in my neuroscience course), as well as a piece on gender identity (which I wrote for my former institution’s student newspaper). Both of these are quite dated, but they at least serve to show that I can write academically.