Three consistent positions for computationalists

Yesterday, as a follow-up to We are not living in a simulation, I posted Eight questions for computationalists to get a better idea of what exactly my computationalist critics were arguing. These were the questions I asked:

  1. As it is used in the sentence “consciousness is really just computation”, is computation:
    a) Something that an abstract machine does, as in “No oracle Turing machine can decide its own halting problem”?
    b) Something that a concrete machine does, as in “My calculator computed 2+2”?
    c) Or, is this distinction nonsensical or irrelevant?

  2. If you answered “a” or “c” to question 1: is there any particular model, or particular class of models, of computation, such as Turing machines, register machines, lambda calculus, etc., that needs to be used in order to explain what makes us conscious? Or, is any Turing-equivalent model equally valid?

  3. If you answered “b” or “c” to question 1: unpack what “the machine computed 2+2” means. What is that saying about the physical state of the machine before, during, and after the computation?

  4. Are you able to make any sense of the concept of “computing red”? If so, what does this mean?

  5. As far as consciousness goes, what matters in a computation: functions, or algorithms? That is, do all computations that give the same outputs for the same inputs feel the same from the inside (this is the “functions” answer), or do the intermediate steps matter (this is the “algorithms” answer)?

  6. Would an axiomatization (as opposed to a complete exposition of the implications of these axioms) of a Theory of Everything that can explain consciousness include definitions of any computational devices, such as “AND gate”?

  7. Would an axiomatization of a Theory of Everything that can explain consciousness mention qualia?

  8. Are all computations in some sense conscious, or only certain kinds?

I got some interesting answers to these questions, and from them I can extract three distinct positions that seem consistent to me.

Consistent Position #1: Qualia skepticism

Perplexed asserted this position in no uncertain terms. Here’s my unpacking of it:

“Qualia do not exist. The things that you’re confused about and are mistaking for qualia can be made clear to you using an argument phrased in terms of computation. When you talk about consciousness, I think I can understand your meaning, but you aren’t referring to anything fundamental or particularly well defined: it’s an unnatural category.”

The internal logic of the qualia skeptic’s position makes sense to me, and I can’t really respond to it other than by expressing personal incredulity. To me, the empirical evidence for the existence of qualia is so clear and so immediate that I can’t figure out what you’re not seeing in order to point to it. However, I shouldn’t need to bring you to your senses (literally!) on this in order to convince you to reject Bostrom’s simulation argument, albeit on grounds completely different from any I’ve argued so far. If you don’t buy that there’s anything fundamental behind consciousness, then you also shouldn’t buy Bostrom’s anthropic reasoning, in which he conjures up the reference class of “observers with human-type experiences”; elsewhere he refers to “conscious experience” and “subjective experience” without any indication that he means something more specific. That is taking an unnatural category and invoking it magically. In the claim that we are something selected with uniform probability from that reference class, how do you make sense of “are”?

Consistent Position #2: Computation is implicit in physics

This position is my best attempt at a synthesis of what TheOtherDave, lessdazed, and prase are getting at. It’s compatible with position #1, but neither one entails the other.

To understand this position, it is helpful, though not necessary, to define the laws of physics in terms of something like a cellular automaton. Each application of the automaton’s update rule can be understood as a primitive operation in a computation. When you apply the update rule repeatedly to neighboring cells, you build up a more complex computation. So “consciousness is just computation” is essentially equivalent in meaning to “consciousness is just physics”.
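To make this concrete, here is a minimal sketch, in Python, of what treating update-rule applications as primitive operations looks like. The choice of Rule 110 (a one-dimensional cellular automaton known to be Turing-complete) is my own illustration, not something any commenter specified:

    # Rule 110: a cell's next state is a fixed function of its
    # (left, center, right) neighborhood.
    RULE_110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }

    def step(cells):
        """One synchronous application of the update rule: on this view,
        a single primitive operation of the computation."""
        n = len(cells)
        return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    # Repeated applications over neighboring cells compose the primitives
    # into a more complex computation.
    state = [0] * 15 + [1] + [0] * 15
    for _ in range(8):
        print("".join("#" if c else "." for c in state))
        state = step(state)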

Position #2 more or less necessitates answering “algorithms” to question #5, or failing that, at least something similar to RobinZ’s answer. If you say “functions”, then you at least need to explain how to reify the concepts of “input” and “output”. You can pull this off by saying that the update rules are the functions, the inputs are the state before each rule application, and the outputs are the state afterward. Any other answer probably means you’re taking a position closer to, or identical with, position #3, which I’ll address next. This comment by peterdjones and his follow-ups to it provide a (Searlesque) intuition pump giving further reasons why a “functions” reply is problematic.
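In case the distinction in question #5 is unclear, here is a small illustration of my own: two procedures that compute the same function (identical outputs for identical inputs) via very different intermediate steps. The “functions” answer says the two must feel the same from the inside; the “algorithms” answer says the differing intermediate steps could matter:

    def sum_iterative(n):
        """Sum 1..n by passing through n intermediate states."""
        total = 0
        for i in range(1, n + 1):
            total += i           # n distinct intermediate steps
        return total

    def sum_closed_form(n):
        """Sum 1..n via Gauss's formula, in a single step."""
        return n * (n + 1) // 2  # one intermediate step

    # Extensionally identical (same function), intensionally distinct
    # (different algorithms):
    assert all(sum_iterative(k) == sum_closed_form(k) for k in range(100))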

I have no objection to position #2. However, it does not imply substrate independence; in fact, it strongly suggests the negation. If your algorithmic primitives are defined at the level of individual update-rule applications, then any change whatsoever to an object’s physical structure is a change to the algorithm it embodies. If you accept position #2 while rejecting position #1, then you may actually be making the same argument I am, merely in a different vocabulary.

Consistent Position #3: Computation is reified by physics

I was both shocked and pleased to see zaph’s answer to question #6, because it bites a bullet that I never believed anyone would bite: that there is actually something fundamental in the laws of physics which defines and reifies the concept of computation in a substrate-independent fashion. I can’t find any inconsistency in this, but I think we have good reason to consider it extremely implausible. Work in the language of physics that is familiar to us and has served us well, the language whose vocabulary consists of things like “particle” and “force” and “Hilbert space”. Now consider the Kolmogorov complexity, in that language, of defining an equivalence relation under which an AND gate implemented in a MOSFET, an AND gate implemented in a neuron, and an AND gate implemented in desert rocks are all equivalent to one another, yet none of them is equivalent to an OR gate implemented in any of those media. That complexity is enormous, and Solomonoff induction therefore tells us to assign vanishingly low probability to such a hypothesis.
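To see where the complexity lives, notice that the abstract gate itself is trivial to define; the hard part, which I have deliberately left as a stub parameter below, is a physics-level interpretation mapping raw physical states onto abstract bits, one that works uniformly for MOSFETs, neurons, and desert rocks without gerrymandering. This sketch (Python, with a toy gate and names entirely of my own invention) is only meant to locate that burden:

    # The abstract notions "AND gate" and "OR gate" are cheap to specify:
    AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
    OR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

    def implements(table, system, interpret):
        """Does `system` implement `table` under the mapping `interpret`?
        All of the Kolmogorov complexity discussed above hides inside
        `interpret`; position #3 needs physics itself to privilege one."""
        return all(interpret(system.run(a, b)) == out
                   for (a, b), out in table.items())

    class ToyVoltageGate:
        """A cartoon 'physical' AND gate: bits encoded as 0V / 5V."""
        def run(self, a, b):
            return min(5.0 * a, 5.0 * b)  # output follows the lower voltage

    def volts_to_bit(v):
        return 1 if v > 2.5 else 0        # one candidate interpretation

    assert implements(AND_TABLE, ToyVoltageGate(), volts_to_bit)
    assert not implements(OR_TABLE, ToyVoltageGate(), volts_to_bit)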

I hope that I’ve fairly represented the views of at least a majority of computationalists on LW. If you think there’s another position available, or if you’re one of the people I’ve called out by name and you think I’ve pigeonholed you incorrectly, please explain yourself.