Chalmers’s arguments are correct, and they can be generalized so that only the system’s inputs and outputs need to be preserved.
For example, imagine a spectrum: at one end, each individual neuron is replaced by an LLM; at the other end, the entire brain is replaced by a single LLM; and in between lie an arbitrarily large number of intermediate states, where at each step the brain is replaced by a collection of progressively larger (and therefore fewer) LLMs. At what exact point would consciousness disappear?
We can also, among other things, use a probability argument: what people mean by consciousness can’t depend on the computational details of how our mind is implemented, because those details don’t impact our thoughts and reasoning (whereas qualia do). It would be probabilistically unjustified to conclude that our mind has to be implemented using specific computations in order to have what we mean by qualia, because there is no way for that extra information to find its way into our reasoning processes in a non-arbitrary way. So to assume (or conclude) that specific computations are necessary would be like believing that a conscious mind has to be implemented in human neural tissue.
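One way to make the probability argument precise (a sketch in my own notation, not Chalmers’s): let $F$ be the mind’s functional organization (its input–output behavior), $I$ the low-level implementation details, and $R$ our reports and reasoning about qualia. If $R$ depends only on $F$, then $R$ is conditionally independent of $I$ given $F$, and Bayes’ rule gives

$$P(R \mid I, F) = P(R \mid F) \;\Longrightarrow\; P(I \mid R, F) = \frac{P(R \mid I, F)\,P(I \mid F)}{P(R \mid F)} = P(I \mid F),$$

so nothing we report or conclude about qualia can serve as evidence about which implementation is required; any belief that specific computations are necessary would be arbitrary.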
Also, consider evolution: which computations implement our thoughts, beliefs, and behavior is largely an evolutionary accident; octopuses or aliens, for example, would use very different computations. Does that mean only humans (and primates sufficiently close to us evolutionarily) are conscious? That’s extremely implausible. And if only certain specific computations produce consciousness, what would be the chance that evolution happened to get all of them exactly right in humans? That would be unimaginably unlikely.