The dominant philosophical stance among naturalists and rationalists is some form of computational functionalism—the view that mental states, including consciousness, are fundamentally about what a system does rather than what it’s made of. Under this view, consciousness emerges from the functional organization of a system, not from any special physical substance or property.
A lot of people say this, but I’m pretty confident that it’s false. In Why it’s so hard to talk about Consciousness, I wrote this on functionalism (… where Camp #1 and Camp #2 roughly correspond to being illusionists vs. realists about consciousness; that’s the short explanation, the longer one is, well, in the post! …):
Functionalist can mean “I am a Camp #2 person and additionally believe that a functional description (whatever that means exactly) is sufficient to determine any system’s consciousness” or “I am a Camp #1 person who takes it as reasonable enough to describe consciousness as a functional property”. I would nominate this as the most problematic term since it is almost always assumed to have a single meaning while actually describing two mutually incompatible sets of beliefs.[3] I recommend saying “realist functionalism” if you’re in Camp #2, and just not using the term if you’re in Camp #1.
As far as I can tell, the majority view on LW (though not by much, but I’d guess it’s above 50%) is just Camp #1/illusionism. Now these people describe their view as functionalism sometimes, which makes it very understandable why you’ve reached that conclusion.[1] But this type of functionalism is completely different from the type that you are writing about in this article. They are mutually incompatible views with entirely different moral implications.
Camp #2 style functionalism is not a fringe view on LW, but it’s not a majority. If I had to guess, just pulling a number out of my hat here, perhaps a quarter of people here believe this.
The main alternative to functionalism in naturalistic frameworks is biological essentialism—the view that consciousness requires biological implementation. This position faces serious challenges from a rationalist perspective:
Again, it’s understandable that you think this, and you’re not the first. But this is really not the case. The main alternative to functionalism is illusionism (which, like I said, is probably a small majority view on LW, but in any case hovers close to 50%). But even if we ignore that and only talk about realist people, biological essentialism wouldn’t be the next most popular view. I doubt that even 5% of people on the platform believe anything like this.
There are reasons to reject AI consciousness other than saying that biology is special. My go-to example here is always Integrated Information Theory (IIT) because it’s still the most popular realist theory in the literature. IIT doesn’t have anything about biological essentialism in its formalism, it’s in fact a functionalist theory (at least with how I define the term), and yet it implies that digital computers aren’t conscious. IIT is also highly unpopular on LW and I personally agree that it’s completely wrong, but it nonetheless makes the point that biological essentialism is not required to reject digital-computer consciousness. In fact, rejecting functionalism is not required for rejecting digital-computer consciousness.
This is completely unscientific and just based on my gut, so don’t take it too seriously, but here would be my honest off-the-cuff attempt at drawing a Venn diagram of the opinion spread on LessWrong, with the size of the circles representing the proportion of views.
Relatedly, EuanMcLean just wrote this sequence against functionalism assuming that this was what everyone believed, only to realize halfway through that the majority view is actually something else.
Thanks for your response! Your original post on the Camp #1/Camp #2 distinction is excellent, thanks for linking (I wish I’d read it before making this post!)
I realise now that I’m arguing from a Camp #2 perspective. Hopefully it at least holds up for the Camp #2 crowd. I probably should have used some weaker language in the original post instead of asserting that “this is the dominant position” if it’s actually only around 25%.
As far as I can tell, the majority view on LW (though not by much, but I’d guess it’s above 50%) is just Camp #1/illusionism. Now these people describe their view as functionalism sometimes, which makes it very understandable why you’ve reached that conclusion.[1] But this type of functionalism is completely different from the type that you are writing about in this article. They are mutually incompatible views with entirely different moral implications.
Genuinely curious here, what are the moral implications of Camp #1/illusionism for AI systems? Are there any? If consciousness is ‘just’ a pattern of information processing that leads systems to make claims about having experiences (rather than being some real property systems can have), would AI systems implementing similar patterns deserve moral consideration? Even if both human and AI consciousness are ‘illusions’ in some sense, we still seem to care about human wellbeing—so should we extend similar consideration to AI systems that process information in analogous ways? Interested in how illusionists think about this (not sure if you identify with illusionism, but it seems like you’re aware of the general position and would be a knowledgeable person to ask.)
There are reasons to reject AI consciousness other than saying that biology is special. My go-to example here is always Integrated Information Theory (IIT) because it’s still the most popular realist theory in the literature. IIT doesn’t have anything about biological essentialism in its formalism, it’s in fact a functionalist theory (at least with how I define the term), and yet it implies that digital computers aren’t conscious.
Again, genuine question. I’ve often heard that IIT implies digital computers are not conscious because a feedforward network necessarily has zero phi (there’s no integration of information because the weights are not being updated). Question is, isn’t this only true during inference (i.e. when we’re talking to the model)? During its training the model would be integrating a large amount of information to update its weights, so it would have a large phi.
Again, genuine question. I’ve often heard that IIT implies digital computers are not conscious because a feedforward network necessarily has zero phi (there’s no integration of information because the weights are not being updated). Question is, isn’t this only true during inference (i.e. when we’re talking to the model)? During its training the model would be integrating a large amount of information to update its weights, so it would have a large phi.
(responding to this one first because it’s easier to answer)
You’re right on with feed-forward networks having zero Φ, but this is actually not the reason why digital Von Neumann[1] computers can’t be conscious under IIT. The reason, as given by Tononi himself, is that:
[...] Of course, the physical computer that is running the simulation is just as real as the brain. However, according to the principles of IIT, one should analyse its real physical components—identify elements, say transistors, define their cause–effect repertoires, find concepts, complexes and determine the spatio-temporal scale at which Φ reaches a maximum. In that case, we suspect that the computer would likely not form a large complex of high Φmax, but break down into many mini-complexes of low Φmax. This is due to the small fan-in and fan-out of digital circuitry (figure 5c), which is likely to yield maximum cause–effect power at the fast temporal scale of the computer clock.
So in other words, the brain has many different, concurrently active elements—the neurons—so the analysis based on IIT gives this rich computational graph where they are all working together. The same would presumably be true for a computer with neuromorphic hardware, even if it’s digital. But in the Von Neumann architecture, there are only a few physical components that handle all these logically separate things in rapid succession.
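To make the contrast a bit more concrete, here is a minimal toy sketch (a rough illustration only, not an actual Φ computation, which would require a full cause–effect analysis with something like the PyPhi library; the graphs and the has_feedback helper are made up for this example). It simply checks whether a system’s causal graph contains any feedback loops, the structural feature that separates the feedforward zero-Φ case from the recurrent, brain-like case:

```python
# Toy illustration only: a crude structural stand-in for "integration",
# not a real IIT Phi computation.

def has_feedback(graph):
    """Return True if the directed graph (dict: node -> set of targets) has a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for target in graph.get(node, ()):
            if color[target] == GRAY:            # back edge => feedback loop
                return True
            if color[target] == WHITE and visit(target):
                return True
        color[node] = BLACK
        return False

    return any(visit(node) for node in graph if color[node] == WHITE)

# A small 3-layer feedforward net: information only flows forward.
feedforward = {"x1": {"h1", "h2"}, "x2": {"h1", "h2"},
               "h1": {"y"}, "h2": {"y"}, "y": set()}

# A small recurrent circuit: elements feed back into one another,
# the kind of structure IIT's integration measure rewards.
recurrent = {"a": {"b"}, "b": {"c"}, "c": {"a", "b"}}

print(has_feedback(feedforward))  # False -> the degenerate, zero-Phi-like case
print(has_feedback(recurrent))    # True
```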
Another potentially relevant lens is that, in the Von Neumann architecture, in some sense the only “active” components are the computer clocks, whereas even the CPUs and GPUs are ultimately just “passive” components that process input signals. Like the CPU gets fed the 1-0-1-0-1 clock signal plus the signals representing processor instructions and the signals representing data, and then processes them. I think that would be another point that one could care about even under a functionalist lens.
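And to spell out the time-multiplexing point, here is another deliberately crude sketch (again just an illustration, nothing from IIT’s formalism; the update rule and names are invented for the example): the same logical update can be carried out by many concurrently active elements, or funnelled through a single physical processing element one logical unit per clock tick. The results are identical at the logical level, which is exactly why a view that evaluates the physical cause–effect structure can still distinguish the two.

```python
# Toy contrast only: the same logical state update performed "concurrently" by
# many elements vs. time-multiplexed through one physical processing element.

logical_units = [0, 1, 1, 0, 1]   # states of five logical "neurons"

def update_rule(i, state):
    # Arbitrary toy rule: a unit turns on iff its left neighbour was on.
    return 1 if state[i - 1] else 0

# Brain-like / neuromorphic picture: every element computes its next state
# "at the same time" from the previous global state.
concurrent_next = [update_rule(i, logical_units) for i in range(len(logical_units))]

# Von Neumann picture: one physical ALU visits the logical units in turn,
# one per clock tick; only a tiny part of the substrate is active at any instant.
sequential_next = []
for i in range(len(logical_units)):        # each iteration ~ one clock tick
    sequential_next.append(update_rule(i, logical_units))

# Logically the two are indistinguishable; IIT's analysis of the *physical*
# components is what treats them differently.
assert concurrent_next == sequential_next
print(concurrent_next)
```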
Genuinely curious here, what are the moral implications of Camp #1/illusionism for AI systems?
I think there is no consensus on this question. One position I’ve seen articulated is essentially “consciousness is not a crisp category, but it’s the source of value anyway”:
I think consciousness will end up looking something like ‘piston steam engine’, if we’d evolved to have a lot of terminal values related to the state of piston-steam-engine-ish things.
Piston steam engines aren’t a 100% crisp natural kind; there are other machines that are pretty similar to them; there are many different ways to build a piston steam engine; and, sure, in a world where our core evolved values were tied up with piston steam engines, it could shake out that we care at least a little about certain states of thermostats, rocks, hand gliders, trombones, and any number of other random things as a result of very distant analogical resemblances to piston steam engines.
But it’s still the case that a piston steam engine is a relatively specific (albeit not atomically or logically precise) machine; and it requires a bunch of parts to work in specific ways; and there isn’t an unbroken continuum from ‘rock’ to ‘piston steam engine’, rather there are sharp (though not atomically sharp) jumps when you get to thresholds that make the machine work at all.
Another position I’ve seen is “value is actually about something other than consciousness”. Dennett also says this, but I’ve seen it on LessWrong as well (several times iirc, but don’t remember any specific one).
And a third position I’ve seen articulated once is “consciousness is the source of all value, but since it doesn’t exist, that means there is no value (although I’m still going to live as though there is)”. (A prominent LW person articulated this view to me but it was in PMs and idk if they’d be cool with making it public, so I won’t say who it was.)

[1] Shouldn’t have said “digital computers” earlier actually, my bad.
Thanks for taking the time to respond. The IIT paper which you linked is very interesting—I hadn’t previously internalised the difference between “large groups of neurons activating concurrently” and “small physical components handling things in rapid succession”. I’m not sure whether the difference actually matters for consciousness or whether it’s a curious artifact of IIT, but it’s interesting to reflect on.
Thanks also for providing a bit of a review around how Camp #1 might think about morality for conscious AI. Really appreciate the responses!