Interesting! Graziano’s Attention Schema Theory is also basically the same: he proposes consciousness to be found in our models of our own attention, and that these models evolved to help control attention.[1]
Graziano thinks mammals, birds and reptiles are conscious, is 50-50 on octopuses, and is very skeptical of fish and arthropods.
Graziano, 2021:
In the attention schema theory (AST), having an automatically constructed self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. The reason why such a self-model evolved in the brains of complex animals is that it serves the useful role of modeling, and thus helping to control, the powerful and subtle process of attention, by which the brain seizes on and deeply processes information.
Suppose the machine has a much richer model of attention. Somehow, attention is depicted by the model as a Moray eel darting around the world. Maybe the machine already had need for a depiction of Moray eels, and it coapted that model for monitoring its own attention. Now we plug in the speech engine. Does the machine claim to have consciousness? No. It claims to have an external Moray eel.
Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
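As a toy illustration of the claim in those passages (that the machine’s report is constrained by its internal model, not by what it actually is), here is a minimal sketch; the class and method names are my own hypothetical choices, not anything from Graziano’s papers:

```python
# Toy sketch: what a system sincerely reports about itself is fixed by
# its internal self-model, not by its actual mechanics.

from dataclasses import dataclass

@dataclass
class Machine:
    # How the machine's self-model depicts its own attention.
    self_model_depiction: str

    def speech_engine(self, query: str) -> str:
        # The speech engine can only consult the internal model; it has
        # no separate window onto what the machine "really" is.
        return f"My model tells me there is {self.self_model_depiction}."

# Same speech engine, different self-models, different sincere reports:
human_like = Machine("a subtle, powerful, nonphysical essence of awareness")
eel_like = Machine("a Moray eel darting around the world")

print(human_like.speech_engine("Do you have consciousness?"))
print(eel_like.speech_engine("Do you have consciousness?"))
```

Both machines answer by the same mechanism; only the contents of the self-model differ, which is exactly why one ends up claiming consciousness and the other an external eel.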
[1] To be clear, though, it’s not the mere fact of modelling or controlling attention, but that attention is modelled in a way that makes it seem mysterious or unphysical, and that’s what explains our intuitions about phenomenal consciousness.
A big part of the issue is that self-reference is hard to reason about, and not just for humans: it took until the 20th century to get a formal justification for how self-reference could work at all. That difficulty is compounded by consciousness being a polarizing and conflationary term, since in a lot of people’s ethical systems it determines whether uploading is desirable and whether a given being should have rights.
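As a concrete instance of that 20th-century formal move (the fixed-point and diagonalization constructions behind Gödel’s and Kleene’s results), here is a minimal sketch in Python: a quine, i.e. a program with complete, paradox-free access to its own description:

```python
# A quine: the two code lines below print exactly their own source.
# (The comment lines aren't part of the quine itself.)
# Self-reference here is fully mechanical: the program contains a
# template for itself plus a rule for filling the template in.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Something like this template-plus-substitution trick is arguably what makes it coherent for a system’s self-model to refer to the very system doing the modeling, without regress.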
I’d also take this as a good first guess, though it really depends on the self-modeling ability of animals in general; in particular, the more general the scenarios an animal has to deal with, the more conscious I’d expect it to be.
I’d weakly guess that consciousness is, in general, closer to a continuous property than to a discrete one.
And yes, I think Graziano’s Attention Schema Theory, combined with Global Workspace Theory and its associated Global Neuronal Workspace, is a key component in why human consciousness has the weird properties that it does (aside from the fact of being conscious at all, which a lot of animals are to some degree).
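To make that combination concrete, here is a deliberately crude sketch (my own caricature, not a model from either theory’s literature): a global workspace step where the most salient content wins and is broadcast (GWT/GNW), plus an attention schema that describes that process in a simplified, mechanics-free way (AST):

```python
# Caricature of GWT/GNW + AST. All names and numbers are illustrative.

def global_workspace_step(contents: dict[str, float]) -> str:
    # GWT/GNW in caricature: the most salient content wins the
    # competition and gets broadcast to the rest of the system.
    return max(contents, key=contents.get)

def attention_schema(winner: str) -> str:
    # AST in caricature: the internal model of that broadcast leaves out
    # the mechanics, so attention is depicted as an unexplained, vivid
    # inner grasping of the winning content.
    return f"I am vividly, inexplicably aware of {winner}."

salience = {"red apple": 0.9, "background hum": 0.2, "itchy sock": 0.4}
print(attention_schema(global_workspace_step(salience)))
# -> I am vividly, inexplicably aware of red apple.
```

On this picture, the weird properties come from the schema’s omissions: the report is generated from a model that never represents the competition and broadcast machinery underneath it.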