Again, genuine question. I’ve often heard that IIT implies digital computers are not conscious because a feedforward network necessarily has zero phi (there’s no integration of information because the weights are not being updated). Question is, isn’t this only true during inference (i.e. when we’re talking to the model)? During its training the model would be integrating a large amount of information to update its weights, so it would have a large phi.
(responding to this one first because it’s easier to answer)
You’re right on with feed-forward networks having zero Φ, but this is actually not the reason why digital Von Neumann[1] computers can’t be conscious under IIT. The reason, as given by Tononi himself, is that:
> [...] Of course, the physical computer that is running the simulation is just as real as the brain. However, according to the principles of IIT, one should analyse its real physical components—identify elements, say transistors, define their cause–effect repertoires, find concepts, complexes and determine the spatio-temporal scale at which Φ reaches a maximum. In that case, we suspect that the computer would likely not form a large complex of high Φmax, but break down into many mini-complexes of low Φmax. This is due to the small fan-in and fan-out of digital circuitry (figure 5c), which is likely to yield maximum cause–effect power at the fast temporal scale of the computer clock.
So in other words, the brain has many different, concurrently active elements—the neurons—and the analysis based on IIT gives a rich computational graph in which they are all working together. The same would presumably be true for a computer with neuromorphic hardware, even if it’s digital. But in the Von Neumann architecture, there are only a few physical components that handle all these logically separate things in rapid succession.
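To make the zero-Φ-for-feed-forward point concrete, here’s a rough sketch of the kind of calculation involved, using the pyphi library. The three-node chain, its transition matrix and the chosen state are just a toy construction of mine (not from the paper), and the exact API may vary a bit between pyphi versions:

```python
import numpy as np
import pyphi

# Toy feed-forward chain A -> B -> C: B copies A, C copies B, and A has no
# inputs (it simply goes to OFF). State-by-node TPM; rows follow pyphi's
# little-endian convention (node A is the least-significant bit).
tpm = np.array([
    [0, 0, 0],  # current (0,0,0) -> next (0,0,0)
    [0, 1, 0],  # (1,0,0) -> (0,1,0)
    [0, 0, 1],  # (0,1,0) -> (0,0,1)
    [0, 1, 1],  # (1,1,0) -> (0,1,1)
    [0, 0, 0],  # (0,0,1) -> (0,0,0)
    [0, 1, 0],  # (1,0,1) -> (0,1,0)
    [0, 0, 1],  # (0,1,1) -> (0,0,1)
    [0, 1, 1],  # (1,1,1) -> (0,1,1)
])
cm = np.array([
    [0, 1, 0],  # A -> B
    [0, 0, 1],  # B -> C
    [0, 0, 0],  # C has no outputs
])
network = pyphi.Network(tpm, cm=cm)
subsystem = pyphi.Subsystem(network, (0, 0, 0), (0, 1, 2))
# Purely feed-forward, so some unidirectional cut severs no connections at all
# (e.g. cutting the non-existent links from C back to {A, B}); big Phi should
# therefore come out as zero.
print("feed-forward chain:", pyphi.compute.phi(subsystem))

# Contrast: pyphi's built-in recurrent example network (OR/AND/XOR with
# feedback), which should give a positive big Phi.
net = pyphi.examples.basic_network()
sub = pyphi.Subsystem(net, (1, 0, 0), (0, 1, 2))
print("recurrent example:", pyphi.compute.phi(sub))
```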
Another potentially relevant lens is that, in the Von Neumann architecture, in some sense the only “active” components are the computer clocks, whereas even the CPUs and GPUs are ultimately just “passive” components that process input signals. The CPU gets fed the 1-0-1-0-1 clock signal plus the signals representing processor instructions and the signals representing data, and then processes them. I think that would be another point that one could care about even under a functionalist lens.
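A crude way to picture the time-multiplexing (just my own illustration, nothing to do with Φ itself): the toy update below nominally involves n concurrently active units, but the way a Von Neumann machine actually executes it, a single accumulator and one multiply-add circuit are reused for every connection, one operation per clock tick:

```python
import numpy as np

# n "concurrent" logical units with recurrent weights W -- executed the way a
# Von Neumann machine runs it: one accumulator register and one multiply-add
# unit, reused for every connection in rapid succession.
rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n))        # logical structure: every unit talks to every unit
state = rng.normal(size=n)         # logical state of the n units

next_state = np.zeros(n)
clock_ticks = 0
for i in range(n):                 # the same physical circuitry serves every unit...
    acc = 0.0                      # ...one accumulator register...
    for j in range(n):
        acc += W[i, j] * state[j]  # ...one multiply-add per clock tick
        clock_ticks += 1
    next_state[i] = np.tanh(acc)

print(f"{n} 'concurrent' units updated in {clock_ticks} sequential steps "
      "through one accumulator")
```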
Genuinely curious here, what are the moral implications of Camp #1/illusionism for AI systems?
I think there is no consensus on this question. One position I’ve seen articulated is essentially “consciousness is not a crisp category, but it’s the source of value anyway”:
> I think consciousness will end up looking something like ‘piston steam engine’, if we’d evolved to have a lot of terminal values related to the state of piston-steam-engine-ish things.
>
> Piston steam engines aren’t a 100% crisp natural kind; there are other machines that are pretty similar to them; there are many different ways to build a piston steam engine; and, sure, in a world where our core evolved values were tied up with piston steam engines, it could shake out that we care at least a little about certain states of thermostats, rocks, hang gliders, trombones, and any number of other random things as a result of very distant analogical resemblances to piston steam engines.
>
> But it’s still the case that a piston steam engine is a relatively specific (albeit not atomically or logically precise) machine; and it requires a bunch of parts to work in specific ways; and there isn’t an unbroken continuum from ‘rock’ to ‘piston steam engine’, rather there are sharp (though not atomically sharp) jumps when you get to thresholds that make the machine work at all.
Another position I’ve seen is “value is actually about something other than consciousness”. Dennett also says this, but I’ve seen it on LessWrong as well (several times iirc, but don’t remember any specific one).
And a third position I’ve seen articulated once is “consciousness is the source of all value, but since it doesn’t exist, that means there is no value (although I’m still going to live as though there is)”. (A prominent LW person articulated this view to me but it was in PMs and idk if they’d be cool with making it public, so I won’t say who it was.)
Shouldn’t have said “digital computers” earlier actually, my bad.
Thanks for taking the time to respond.
The IIT paper which you linked is very interesting—I hadn’t previously internalised the difference between “large groups of neurons activating concurrently” and “small physical components handling things in rapid succession”. I’m not sure whether the difference actually matters for consciousness or whether it’s a curious artifact of IIT, but it’s interesting to reflect on.
Thanks also for providing a bit of a review around how Camp #1 might think about morality for conscious AI. Really appreciate the responses!