Correspondingly, in the actual brain algorithm (“territory”), any possible thought can be represented by the cortex (by definition), but only one thought can be there at a time (to a first approximation, see §2.3 below).
I don’t think I agree with the “but only one thought can be there at a time” part.
I think (but am not at all sure) that different (sensory) modalities can, in unfocused states, process unrelated information/thoughts simultaneously.
Here’s my current rough model that might be wrong in many ways:
I think each of the workspaces has its own short memory, which spans maybe ~2 seconds; see the toy sketch after these notes. (E.g. sometimes when I become aware of an earworm, I feel like it must’ve been there for at least 2 seconds already, even though I previously wasn’t aware of it.)
(I currently don’t know whether there is a “space of awareness” or whether (as in the image above) it’s just that some workspace is broadcasting its contents broadly.)
(I think it has sometimes happened to me that I was daydreaming about something which was written to memory (although not very strongly), but where I didn’t notice that I was thinking it, and only later realized “oh wait, I thought about that before”. So perhaps something was only weakly attended to and left some trace in memory, but then either the “produce self-reflective thought” step or the “associate self-reflective thought with homunculus” step didn’t happen. But I don’t know; I’d want to re-observe this phenomenon to be sure it’s not just fabricated.

EDIT: Actually, having taken a break and daydreamed a bit just now, I think it’s rather the case that one usually has a self-reflective thought (about what object-level thoughts one is thinking) attended to fairly often, so there’s also a memory of oneself thinking about something. However, sometimes in deep daydreams (or other states like flow), self-reflective thoughts are only very rarely attended to, and sometimes never. I think I often notice roughly what I was daydreaming about afterwards, but maybe sometimes the daydream is disrupted by something outside, so that no self-reflective thought was written to memory, which might later make me slightly surprised when I notice that I e.g. daydreamed about a particular movie earlier but didn’t notice it at the time. In other words: I think the self-reflective thoughts still get produced unconsciously when daydreaming, but they might not get attended to, so no self-associated memory of the event is formed.)
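To make this rough model a bit more concrete, here’s a minimal toy sketch in Python. Everything in it (the workspace names, the 2-second span, the list-of-timestamps representation) is my own illustrative assumption, not a claim about how this is neurally implemented:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Toy workspace: holds recent contents in a short decaying trace."""
    name: str
    memory_span: float = 2.0  # assumed ~2 s span, purely illustrative
    trace: list = field(default_factory=list)  # (timestamp, content) pairs

    def process(self, content, now):
        # Each workspace runs independently, so unrelated modalities can
        # hold unrelated contents at the same time.
        self.trace.append((now, content))

    def recall(self, now):
        # Only contents still inside the ~2 s window are recoverable,
        # e.g. an earworm you notice only after it has been playing a while.
        return [c for t, c in self.trace if now - t <= self.memory_span]

auditory = Workspace("auditory")
motor = Workspace("motor")
t0 = time.monotonic()
auditory.process("earworm melody", t0)
motor.process("fidgeting with a pen", t0)

print(auditory.recall(t0 + 1.5))  # ['earworm melody']: still in the window
print(auditory.recall(t0 + 2.5))  # []: the trace has faded past the span
```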
I don’t think I agree with the “but only one thought can be there at a time” part.
I’m probably just defining “thought” more broadly than you. The cortex has many areas. The auditory parts can be doing some auditory thing, and simultaneously the motor parts can be doing some motor thing, and all that together constitutes (what I call) a “thought”.
I don’t think anything in the series really hinges on the details here. “Conscious awareness” is not conceptualized as some atomic concept where there’s nothing else to say about it. If you ask people to describe their conscious awareness, they can go on and on for hours. My claim is that when they go on and on for hours describing “conscious awareness”, all the rich details they’re describing can be mapped onto accurate claims about properties of the cortex and its activation states. (E.g., see the next part of this comment.)
I think each of the workspaces has its own short memory, which spans maybe ~2 seconds.
I agree that, if some part of the cortex is in activation state A at time T, those particular neurons (that constitute A) will get less and less active over the course of a second or two, such that at time T+1 second, it’s still possible for other parts of the cortex to reactivate activation state A via an appropriate query. I don’t think all traces of A immediately disappear entirely every time a new activation state appears.
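Here’s a toy illustration of what I mean by slow fading plus query-based reactivation; the exponential decay, the time constant, and the threshold are all just illustrative assumptions, not anything I’m claiming about the actual neural dynamics:

```python
import math

class ActivationTrace:
    """Toy model: activation state A fades after time T, but for a second
    or two it stays strong enough for a query to reactivate it."""

    def __init__(self, tau=1.0, threshold=0.2):
        self.tau = tau              # decay time constant in seconds (assumed)
        self.threshold = threshold  # minimum strength a query can latch onto
        self.strength = 1.0         # fully active at time T
        self.elapsed = 0.0

    def step(self, dt):
        # The trace fades gradually; it does not vanish the instant a new
        # activation state appears.
        self.elapsed += dt
        self.strength = math.exp(-self.elapsed / self.tau)

    def query(self):
        # An appropriate query from elsewhere restores A to full activation,
        # provided some above-threshold trace remains.
        if self.strength >= self.threshold:
            self.strength, self.elapsed = 1.0, 0.0
            return True
        return False

trace = ActivationTrace()
trace.step(1.0)                  # one second after T
print(round(trace.strength, 2))  # ~0.37: faded, but not gone
print(trace.query())             # True: still reactivatable at T + 1 second
```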
Again, I think the capability of the cortex to have more than one incompatible generative model active simultaneously is extremely limited, and that for most purposes we should think of it as only having a single (MAP-estimate) generative model active. But the capability to track multiple incompatible models simultaneously does exist, to some extent. I think this slow-fading thing is one application of that capability. Another is the time-extended probabilistic inference thing that I talked about in §2.3.
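And a toy sketch of the “mostly a single MAP-estimate model, with very limited tracking of incompatible alternatives” picture; the capacity limit of 2 and all the probabilities are made up for illustration:

```python
# Bayesian updating over a handful of incompatible generative models,
# where only the MAP estimate gets "broadcast".

def update(posterior, likelihoods):
    """One step of Bayesian updating over candidate models."""
    unnorm = {m: p * likelihoods.get(m, 1e-9) for m, p in posterior.items()}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

def prune(posterior, capacity=2):
    """Keep only a few models 'alive' at once (the assumed capacity limit);
    everything else is dropped and the rest renormalized."""
    kept = dict(sorted(posterior.items(), key=lambda kv: -kv[1])[:capacity])
    z = sum(kept.values())
    return {m: p / z for m, p in kept.items()}

# Three incompatible interpretations of an ambiguous input:
posterior = {"model_A": 0.4, "model_B": 0.35, "model_C": 0.25}
posterior = prune(posterior)                   # only A and B stay tracked
posterior = update(posterior, {"model_A": 0.9, "model_B": 0.3})
map_model = max(posterior, key=posterior.get)  # what gets "broadcast"
print(posterior, "->", map_model)              # model_A dominates
```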
Ok yep, sounds like we basically agree, I think.