The conscious tape

This post comprises one question and no answers. You have been warned.

I was reading “How minds can be computational systems”, by William Rapaport, and something caught my attention. He wrote,

Computationalism is—or ought to be—the thesis that cognition is computable … Note, first, that I have said that computationalism is the thesis that cognition is computable, not that it is computation (as Pylyshyn 1985 p. xiii characterizes it). … To say that cognition is computable is to say that there is an algorithm—more likely, a collection of interrelated algorithms—that computes it. So, what does it mean to say that something ‘computes cognition’? … cognition is computable if and only if there is an algorithm … that computes this function (or functions).

Rapaport was talking about cognition, not consciousness. The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you’re talking about “cognition”, it’s just a choice between two different ways to define cognition.

When it comes to consciousness, I consider myself a computationalist. But I hadn’t realized before that my explanation of consciousness as computational “works” by jumping back and forth between those two incompatible positions. Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.

Option 1: Consciousness is computed

If consciousness is computed, then there are no necessary dynamics. All that matters is getting the right output. It doesn’t matter what algorithm you use to get that output, or what physical machinery you use to compute it. In the real world, it matters how fast you compute it; but surely you can provide a simulated world at the right speed for your slow or fast algorithm. In humans today, the output is not produced all at once—but from a computationalist perspective, that isn’t important. I know “emergence” is wonderful, but it’s still Turing-computable. Whatever counts as a “correct” sequence of inputs and outputs, even if they overlap in time, you can summarize the inputs over time in a single static representation, and likewise the outputs.
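
As a toy illustration of that last claim: two streams of timed events that overlap in time can be interleaved into one static, timestamped string without losing anything. This is a minimal sketch; the events, timestamps, and encoding are all invented for the example.

```python
# Toy sketch: time-overlapping inputs and outputs, summarized in one
# static representation. All events and formats are invented.

inputs = [(0.0, "hear: hello"), (1.5, "see: red ball")]
outputs = [(0.7, "say: hi"), (1.6, "reach: ball")]

# Tag each event with its stream and interleave everything by time.
events = sorted(
    [(t, "IN", e) for t, e in inputs] + [(t, "OUT", e) for t, e in outputs]
)

# One static string now carries the whole dynamic history.
tape = ";".join(f"{t}|{kind}|{e}" for t, kind, e in events)
print(tape)
# 0.0|IN|hear: hello;0.7|OUT|say: hi;1.5|IN|see: red ball;1.6|OUT|reach: ball
```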

So what is conscious, in this view? Well, the algorithm doesn’t matter—remember, we’re not asking for O(consciousness); we’re saying that consciousness is computed, and therefore is the output of a computation. The machine doing the computing is one step further removed than the algorithm, so it’s certainly not eligible as the seat of consciousness; it can be replaced by any of an infinite number of computationally equivalent substrates.

Whatever it is that’s conscious, you can compute it and represent it in a static form. The simplest interpretation is that the output itself is conscious. So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious. Or the information on the tape, or—whatever it is that’s conscious, it is a static thing, not a living, dynamic thing. If all that matters is the output, the process doesn’t matter. Time doesn’t enter into it.

The only way out of this is to claim that an output which is conscious when it comes out of a dynamic real-time system becomes unconscious when it’s converted into a static representation, even if the two representations contain exactly the same information. (X and Y have the same information if an observer can translate X into Y, and Y into X. The requirement for an observer may be problematic here.) This strikes me as not being computationalist at all. Computationalism means considering two computational outputs equivalent if they contain the same information, whether they’re computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels. Is the syntactic transformation from a dynamic to a static representation a greater qualitative change than the transformation from Tinkertoys to neurons? I don’t think so.
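
To make that parenthetical definition concrete, here is a minimal sketch of “same information” as mutual translatability, with the observer played by a pair of inverse functions. The representations and the encoding are invented for the example.

```python
# Sketch: X and Y have the same information if an observer can translate
# X into Y and Y into X. Here the "observer" is two inverse functions.

def to_static(events):
    """Dynamic history (list of timed events) -> static string."""
    return ";".join(f"{t}|{e}" for t, e in events)

def to_dynamic(tape):
    """Static string -> dynamic history."""
    return [(float(t), e) for t, e in
            (chunk.split("|") for chunk in tape.split(";"))]

history = [(0.0, "hear: hello"), (0.7, "say: hi")]
tape = to_static(history)

# Both round trips succeed, so by this definition the dynamic and the
# static representation contain exactly the same information.
assert to_dynamic(to_static(history)) == history
assert to_static(to_dynamic(tape)) == tape
```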

Option 2: Consciousness is computation

If consciousness is computation, then we have the satisfying feeling that how we do those computations matters. But then we’re not computationalists anymore!

A computational analysis will never say that one algorithm for producing a series of outputs produces an extra computational effect (consciousness) that another method does not. If it’s not output, or internal representational state, it doesn’t count. There are no other “by-products of computation”. If you use a context-sensitive grammar to match a regular expression, it doesn’t make the answer more special than if you used a regular grammar.
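
A minimal sketch of that last point, with an invented pattern: a hand-rolled two-state DFA (regular power) and Python’s backtracking re module (which has more than regular power) recognizing the same language. The outputs agree on every string, so no analysis of the outputs can tell the two methods apart.

```python
import re

def dfa_match_ab(s):
    """Match a*b with a two-state DFA: loop on 'a' in state 0,
    accept exactly when a single final 'b' has been read."""
    state = 0
    for ch in s:
        if state == 0 and ch == "a":
            state = 0
        elif state == 0 and ch == "b":
            state = 1           # saw the final 'b'
        else:
            return False        # anything after 'b', or a bad symbol
    return state == 1

# Two very different mechanisms, identical answers.
for s in ["b", "aaab", "aba", "abb", ""]:
    assert dfa_match_ab(s) == bool(re.fullmatch(r"a*b", s))
```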

Don’t protest that a human talks and walks and thereby produces side-effects during the computation. That is not a computational analysis. A computational analysis will give the same result if you translate the algorithm, and the machine running it, onto the tape of a Turing machine. Anything that gives a different result is not a computational analysis. If these side-effects don’t show up on the tape, it’s because you forgot to represent them.
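
In other words: if you want the walking and talking to count, put them on the tape. A sketch with an invented agent and encoding: once motor acts and speech are written down as output symbols, they are just more tape content, and the analysis proceeds as before.

```python
# Sketch: "side-effects" become ordinary output once you represent them.
# The agent, its actions, and the encoding are all invented.

def agent_step(percept):
    # A trivial stand-in for whatever the human or algorithm does.
    return [("say", f"I noticed {percept}"), ("walk", "one step forward")]

tape = []
for t, percept in enumerate(["a light", "a sound"]):
    for kind, detail in agent_step(percept):
        tape.append(f"{t}|{kind}|{detail}")   # speech and motion alike

print(";".join(tape))
# Every side-effect now shows up on the tape; nothing is left over
# for a computational analysis to miss.
```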

An analysis of the actual computation process, as opposed to its output, could be a thermodynamic analysis, which would care about things like how many bits the algorithm erased internally. I find it hard to believe that consciousness is a particular pattern of entropy production or waste heat. Or it could be a complexity or runtime analysis that cares about how long the computation takes. A complexity analysis has a categorical output; there’s no such thing as a function being “a little bit recursively enumerable”, as I believe there is with consciousness. So I’d be surprised if “conscious” is a property of an algorithm in the same way that “recursively enumerable” is. A runtime analysis can give more quantitative answers, but I’m pretty sure you can’t become conscious by increasing your runtime. (Otherwise, Windows Vista would be conscious.)
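
As a toy version of the bit-erasure bookkeeping (a Landauer-style count, invented for illustration, not a real thermodynamic analysis): two ways of computing the same parity, one overwriting its state at every step, one keeping its whole history and erasing nothing. Identical output, different internal erasure.

```python
# Toy bit-erasure count: same output, different internal erasure.
# An invented illustration, not a real thermodynamic analysis.

bits = [1, 0, 1, 1, 0, 1]

# Version 1: overwrite one accumulator; each step destroys its old value.
acc, erased = 0, 0
for b in bits:
    acc ^= b
    erased += 1               # the previous value of acc is gone

# Version 2: append-only prefix parities; nothing is ever overwritten.
history = [0]
for b in bits:
    history.append(history[-1] ^ b)

assert acc == history[-1]     # identical output...
print(erased, 0)              # ...but 6 bits erased versus 0
```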

Option 3: Consciousness is the result of quantum effects in microtubules

Just kidding. Option 3 is left as an exercise for the reader, because I’m stuck. I think a promising angle to pursue would be the necessity of an external observer to interpret the “conscious tape”. Perhaps a conscious computational device is one that observes itself and provides its own semantics. I don’t understand how any process can do that; but a static representation clearly can’t.

ADDED

Many people are replying by saying, “Obviously, option 2 is correct,” then listing arguments in its favor without addressing the problems with option 2. That’s cheating.