Thank you for writing this series.

I have a couple of questions about conscious awareness, and a question about intuitive self-models in general. They might be out-of-scope for this series, though.
Questions 1 and 2 are just for my curiosity. Question 3 seems more important to me, but I can imagine that it might be a dangerous capabilities question, so I acknowledge you might not want to answer it for that reason.
1. In 2.4.2, you say that things can only get stored in episodic memory if they were in conscious awareness. People can sometimes remember events from their dreams. Does that mean that people have conscious awareness during (at least some of) their dreams?

2. Is there anything you can say about what unconsciousness is, i.e., why there is nothing in conscious awareness during this state? Is the cortex not thinking any (coherent?) thoughts? (I have not studied unconsciousness.)

3. About the predictive learning algorithm in the human brain—what types of incoming data does it have access to? What types of incoming data is it building models to predict? I understand that it would be predicting data from your senses of vision, hearing, and touch, etc. But when it comes to building an intuitive self-model, does it also have data that directly represents what the brain algorithm is doing (at some level)? Or does it have to infer the brain algorithm from its effect on the external sense data (e.g. motor control to change what you’re looking at)?

In the case of conscious awareness, does the predictive algorithm receive “the thought currently active in the cortex” as an input to predict? Or does it have to infer the thought when trying to predict something else?
Good questions, thanks!

In 2.4.2, you say that things can only get stored in episodic memory if they were in conscious awareness. People can sometimes remember events from their dreams. Does that mean that people have conscious awareness during (at least some of) their dreams?
My answer is basically “yes”, although different people might have different definitions of the concept “conscious awareness”. In other words, in terms of map-territory correspondence, I claim there’s a phenomenon P in the territory (some cortex neurons / concepts / representations are active at any given time, and others are not, as described in the post), and this phenomenon P gets incorporated into everyone’s map, and that’s what I’m talking about in this post. And this phenomenon P is part of the territory during dreaming too.
But it’s not necessarily the case that everyone will define the specific English-language phrase “conscious awareness” to indicate the part of their map whose boundaries are drawn exactly around that phenomenon P. Instead, for example, some people might feel like the proper definition of “conscious awareness” is something closer to “the phenomenon P in the case when I’m awake, and not drugged, etc.”, which is really P along with various additional details and connotations and associations, such as the links to voluntary control and memory. Those people would still be able to conceptualize the phenomenon P, of course, and it would still be a big part of their mental worlds, but to point to it you would need a whole sentence, not just the two words “conscious awareness”.
Is there anything you can say about what unconsciousness is, i.e., why there is nothing in conscious awareness during this state? Is the cortex not thinking any (coherent?) thoughts? (I have not studied unconsciousness.)
I think sometimes the cortex isn’t doing much of anything, or at least, not running close-enough-to-normal that neurons representing thoughts can be active.
Alternatively, maybe the cortex is doing its usual thing of activating groups of neurons that represent thoughts and concepts—but it’s neither forming memories (that last beyond a few seconds), nor taking immediate actions. Then you “look unconscious” from the outside, and you also “look unconscious” from the perspective of your future self. There’s no trace of what the cortex was doing, even if it was doing something. Maybe brain scans can distinguish between those possibilities, though.
About the predictive learning algorithm in the human brain…
I think I declare that out-of-scope for this series, from some combination of “I don’t know the complete answer” and “it might be a dangerous capabilities question”. Those are related, of course—when I come upon things that might be dangerous capabilities questions, I often don’t bother trying to answer them :-P