This is my biggest view on what consciousness actually is: it's essentially a special case of modeling the world, where in order to keep your own body alive, you need to have a model of the body and brain, and that's what consciousness basically is, a model of ourselves.
While the original good regulator theorem has several severe issues, there are newer good regulator theorems by John Wentworth that do actually require an optimal regulator to use a model. This is a central example of a rationalist taking an open problem in mathematics and control theory and significantly advancing the state of the art with proper proofs, getting non-trivial generalizations of theorems that had technically stood unresolved for 50+ years, which is a pretty good demonstration of rationalism/intelligence on its best days:
https://www.lesswrong.com/posts/Dx9LoqsEh3gHNJMDk/fixing-the-good-regulator-theorem
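The good regulator idea can be made concrete with a tiny toy problem. The setup below is my own illustrative choice (the states, the dynamics `Z = (X + R) mod 3`, and the helper `outcome_entropy` are all assumptions for the sketch, not Wentworth's formalism): a regulator that drives the outcome's entropy to zero is forced to track the system's state, i.e., to behave as a model of it.

```python
import itertools
from collections import Counter
from math import log2

# Toy setup in the spirit of the Conant-Ashby good regulator theorem:
# system state X is uniform on {0,1,2}, the regulator picks an action
# R(X), and the regulated outcome is Z = (X + R(X)) mod 3.
# "Good regulation" here means low entropy of the outcome Z.
states = [0, 1, 2]

def outcome_entropy(policy):
    """Entropy (bits) of Z = (X + policy[X]) mod 3, with X uniform."""
    counts = Counter((x + policy[x]) % 3 for x in states)
    probs = [c / len(states) for c in counts.values()]
    return -sum(p * log2(p) for p in probs)

# Enumerate every deterministic policy {0,1,2} -> {0,1,2} and keep the
# perfect regulators (outcome entropy exactly zero).
best = [p for p in itertools.product(states, repeat=3)
        if outcome_entropy(p) == 0.0]

# Every optimal policy is injective in X: to hold Z constant, the
# regulator's output must vary one-to-one with (i.e., model) the state.
assert all(len(set(p)) == len(states) for p in best)
print(best)
```

A constant policy like `(0, 0, 0)`, which ignores the system entirely, instead produces the maximum-entropy outcome, which is the contrapositive of the same point: no model, no good regulation.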
This implies that consciousness is actually pretty simple and natural as a solution to practical problems. It also implies that as you try to make an intelligence do more and more tasks, you are going to have to pick narrower and narrower task spaces to avoid conscious AI.
Another interesting point is that this suggests lots of animals are a little bit conscious. My view is that consciousness is closer to a continuous property, but some things, like rocks, have no need to model themselves and thus aren't conscious.
A big reason why I don't care too much right now is that I'm focused on other problems, like AGI safety, and more importantly, my morality doesn't rely on consciousness for other beings to be valuable to me.
Interesting! Graziano’s Attention Schema Theory is also basically the same: he proposes consciousness to be found in our models of our own attention, and that these models evolved to help control attention. To be clear, though, it’s not the mere fact of modelling or controlling attention, but that attention is modelled in a way that makes it seem mysterious or unphysical, and that’s what explains our intuitions about phenomenal consciousness.[1]
Graziano thinks mammals, birds and reptiles are conscious, is 50-50 on octopuses and very skeptical of fish and arthropods.
Graziano, 2021:
In the attention schema theory (AST), having an automatically constructed self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. The reason why such a self-model evolved in the brains of complex animals is that it serves the useful role of modeling, and thus helping to control, the powerful and subtle process of attention, by which the brain seizes on and deeply processes information.
Suppose the machine has a much richer model of attention. Somehow, attention is depicted by the model as a Moray eel darting around the world. Maybe the machine already had need for a depiction of Moray eels, and it coapted that model for monitoring its own attention. Now we plug in the speech engine. Does the machine claim to have consciousness? No. It claims to have an external Moray eel.
Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
A big part of the issue is that self-reference is hard to reason about, and not just for humans: it took until the 20th century to get a formal justification for how self-reference could work at all. This is combined with consciousness being a polarizing and conflationary term, since in a lot of people's ethical systems it determines whether uploading is desirable and whether other beings should have rights.
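The 20th-century justification referred to here is the line of results from Gödel's diagonal lemma to Kleene's recursion theorem, which show that a program or formal sentence can refer to its own description without paradox. The standard concrete demonstration is a quine, a program whose output is exactly its own source. A minimal sketch:

```python
# A quine: the string holds a template of the whole program, and
# substituting the string's own repr into itself (via %r) reproduces
# the full source, which the program then prints.
src = 'src = %r\nquine = src %% src\nprint(quine)'
quine = src % src
print(quine)
```

Running the printed output as a fresh program prints the same text again, which is the fixed-point behavior the recursion theorem guarantees is always constructible.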
I'd also take this as a good first guess, though it really depends on the self-modeling ability of animals in general; in particular, the more general the scenarios they have to handle, the more conscious they are likely to be.
I’d weakly guess consciousness is closest to a continuous property, rather than a discrete one in general.
And yes, I think Graziano's Attention Schema Theory, combined with Global Workspace Theory and its associated Global Neuronal Workspace, is a key component of why human consciousness has the weird properties that it does (aside from the fact of being conscious, which a lot of animals are to some degree).