Like I said in this post, I think the contents of conscious awareness correspond more or less to what’s happening in the cortex. The homolog to the cortex in non-mammal vertebrates is called the “pallium”, and the pallium along with the striatum and a few other odds and ends comprises the “telencephalon”.
I don’t know anything about octopuses, but I would be very surprised if the fish pallium lacked recurrent connections. I don’t think your link says that, though. The relevant part seems to be:
While the fish retina projects diffusely to nine nuclei in the diencephalon, its main target is the midbrain optic tectum (Burrill and Easter, 1994). Thus, the fish visual system is highly parcellated, at least, in the sub-telencephalonic regions. Whole brain imaging during visuomotor reflexes reveals widespread neural activity in the diencephalon, midbrain and hindbrain in zebrafish, but these regions appear to act mostly as feedforward pathways (Sarvestani et al., 2013; Kubo et al., 2014; Portugues et al., 2014). When recurrent feedback is present (e.g., in the brainstem circuitry responsible for eye movement), it is weak and usually arises only from the next nucleus within a linear hierarchical circuit (Joshua and Lisberger, 2014). In conclusion, fish lack the strong reciprocal and networked circuitry required for conscious neural processing.
This passage is just about the “sub-telencephalonic regions”, i.e. they’re not talking about the pallium.
To be clear, the stuff happening in sub-telencephalonic regions (e.g. the brainstem) is often relevant to consciousness, of course, even if it’s not itself part of consciousness. One reason is that stuff happening in the brainstem can turn into interoceptive sensory inputs to the pallium / cortex. Another reason is that stuff happening in the brainstem can directly mess with what’s happening in the pallium / cortex in other ways besides serving as sensory inputs. One example is (what I call) the valence signal, which can make conscious thoughts either stay or go away. Another is (what I call) “involuntary attention”.
Oh and sorry just to be clear, does this mean you do think that recurrent connections in the cortex are essential for forming intuitive self-models / the algorithm modelling properties of itself?
Umm, I would phrase it as: there’s a particular computational task called approximate Bayesian probabilistic inference, and I think the cortex / pallium performs that task (among others) in vertebrates, and I don’t think it’s possible for biological neurons to perform that task without lots of recurrent connections.
And if there’s an organism that doesn’t perform that task at all, then it would have neither an intuitive self-model nor an intuitive model of anything else, at least not in any sense that’s analogous to ours and that I know how to think about.
To be clear: (1) I think you can have some brain region with lots of recurrent connections that has nothing to do with intuitive modeling, and (2) it’s possible for a brain region to perform approximate Bayesian probabilistic inference and have recurrent connections, but still not have an intuitive self-model, for example if the hypothesis space is closer to a simple lookup table than to a complicated hypothesis space involving complex compositional interacting entities etc.
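[Editor’s note: to make the “approximate Bayesian inference via recurrence” point a bit more concrete, here is a minimal toy sketch. It is my own illustration, not anything from the post or the papers being discussed: the Gaussian generative model, the function name, and all parameters are made up for the example. The idea is just that the estimate is reached by repeatedly feeding the current guess back as a prediction and correcting it with the resulting prediction errors.]

```python
# Toy sketch: approximate Bayesian inference as a recurrent settling loop.
# Assumed generative model (illustrative only): prior mu ~ Normal(0, 1),
# observation x ~ Normal(mu, 0.5).

def recurrent_inference(x, mu_prior=0.0, var_prior=1.0, var_obs=0.5,
                        n_iters=50, lr=0.1):
    """Each iteration is one 'recurrent' pass: send the current estimate out
    as a prediction, take the prediction errors back in, nudge the estimate.
    This is gradient ascent on the log joint p(x, mu)."""
    mu_hat = mu_prior                                # start at the prior mean
    for _ in range(n_iters):
        err_obs = (x - mu_hat) / var_obs             # bottom-up prediction error
        err_prior = (mu_prior - mu_hat) / var_prior  # error against the prior
        mu_hat += lr * (err_obs + err_prior)         # settle toward the posterior mode
    return mu_hat

x = 2.0
mu_recurrent = recurrent_inference(x)
# Exact Gaussian posterior mean (precision-weighted average), for comparison:
mu_exact = (x / 0.5 + 0.0 / 1.0) / (1 / 0.5 + 1 / 1.0)
print(round(mu_recurrent, 4), round(mu_exact, 4))    # both ≈ 1.3333
```

[The point of the loop is only that the answer emerges from iterating the same circuit rather than from a single feedforward sweep; and, per (2) above, nothing in this toy has anything like a self-model, since its “hypothesis space” is a single number.]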
Thank you for the response! I am embarrassed that I didn’t realise that the lack of recurrent connections referenced in the sources was referring to regions outside of their cortex-equivalent; I should’ve read through more thoroughly :) I am pretty up-to-date in terms of those things.
Can I additionally ask why do you think some invertebrates likely have intuitive self models as well? Would you restrict this possibility to basically just cephalopods and the like (as many do, being the most intelligent invertebrates), or would you likely extend it to creatures like arthropods as well? (what’s your fuzzy estimate that an ant could model itself as having awareness?)
why do you think some invertebrates likely have intuitive self models as well?
I didn’t quite say that. I made a weaker claim that “presumably many invertebrates [are] active agents with predictive learning algorithms in their brain, and hence their predictive learning algorithms are…incentivized to build intuitive self-models”.
It seems reasonable to presume that octopuses have predictive learning algorithms in their nervous systems, because AFAIK that’s the only practical way to wind up with a flexible and forward-looking understanding of the consequences of your actions, and octopuses (at least) are clearly able to plan ahead in a flexible way.
However, “incentivized to build intuitive self-models” does not necessarily imply “does in fact build intuitive self-models”. As I wrote in §1.4.1, just because a learning algorithm is incentivized to capture some pattern in its input data, doesn’t mean it actually will succeed in doing so.
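[Editor’s note: here is a tiny illustration of “incentivized but not succeeding”, under my own made-up setup rather than anything from §1.4.1 itself: a linear model trained on XOR is “incentivized” to capture the pattern in the sense that its loss would drop if it did, but its hypothesis space cannot represent the pattern, so it never does.]

```python
# XOR: the pattern is present in the data, and lower loss is available to any
# model that captures it, but a linear model cannot represent it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1
for _ in range(2000):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in data:             # full-batch gradient on squared error
        err = (w1 * x1 + w2 * x2 + b) - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    w1 -= lr * g1 / len(data)
    w2 -= lr * g2 / len(data)
    b -= lr * gb / len(data)

# The best a linear model can do on XOR is to predict ~0.5 for every input:
for (x1, x2), y in data:
    print((x1, x2), round(w1 * x1 + w2 * x2 + b, 2), "target:", y)
```

[The incentive (lower loss) is there the whole time; the hypothesis space just can’t cash it in, which is roughly the distinction being drawn above.]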
Would you restrict this possibility to basically just cephalopods and the like
“incentivized to build intuitive self-models” does not necessarily imply “does in fact build intuitive self-models”. As I wrote in §1.4.1, just because a learning algorithm is incentivized to capture some pattern in its input data, doesn’t mean it actually will succeed in doing so.
No opinion.
Right, of course. So would this imply that organisms that have very simple brains / roles in their environment (for example, not needing to end up with a flexible understanding of the consequences of their actions) would have a very weak incentive too?
And if an intuitive self-model helps with things like flexible planning, then even though it’s a creation of the ‘blank-slate’ cortex, surely some organisms would have a genome that sets up certain hyperparameters that would encourage it, no? It would seem strange for something pretty seriously adaptive to be purely an ‘epiphenomenon’ (as per language being facilitated by hyperparameters encoded in the genome). But also it’s fine if you just don’t have an opinion on this haha. (Also: wouldn’t some animals not have an incentive to create self-models if creating a self-model would not seriously increase performance in any relevant domain? Like a dog trying to create an in-depth model of the patterns that appear on computer monitors maybe.)
It does seem like flexible behaviour in some general sense is perfectly possible without awareness (as I’m sure you know) but I understand that awareness would surely help a whole lot.
You might have no opinion on this at all, but would you have any vague guess as to why you can only verbally report items in awareness? (Because even if awareness is a model of serial processing and verbal report requires that kind of global projection / high state of attention, I’ve still seen studies showing that stimuli can be globally accessible / globally projected in the brain and yet still not consciously accessible, presumably in your model due to a lack of modelling of that global access.)
Or no, sorry, I’ve gone back over the papers and I’m still a bit confused.
Brian Key seems to specifically claim that fish and octopuses cannot feel pain, with reference to the recurrent connections of their pallium (plus the octopus equivalent, which seems to be the supraesophageal complex).
fish also lack a laminated and columnar organization of neural regions that are strongly interconnected by reciprocal feedforward and feedback circuitry [...] Although the medial pallium is weakly homologous to the mammalian amygdala, these structures principally possess feedforward circuits that execute nociceptive defensive behaviours
However, he then also claims:
This conclusion is supported by lesion studies that have shown that neither the medial pallium nor the whole pallium is required for escape behaviours from electric shock stimuli in fish (Portavella et al., 2004). Therefore, given that the pallium is not even involved in nociceptive behaviours, it could not be inferred that it plays a role in pain.
Which seems a little silly to me because I’m fairly certain humans without a cortex also show nociceptive behaviours?
Which makes me think his claim (with regard to fish consciousness at least) is really just that the feedback circuitry required for the brain to make predictions about its own algorithm (and thus become subjectively aware) just isn’t strong enough / is too minimal? He does source a pretty vast amount of information to try and justify this, so much that I haven’t meaningfully made a start on it yet; it’s pretty overwhelming. Overall I just feel more uncertain.
I’ve gone back over his paper on octopuses with my increased understanding, and he specifically seems to make reference to a lack of feedback connections between lobes (not just subesophageal lobes). Specifically, he seems to focus on the fact that the posterior buccal lobe (which is supraesophageal) has ‘no second-order sensory fibres (that) subsequently project from the brachial lobe to the inferior frontal system’, meaning ‘it lacks the ability to feedback prediction errors to these lobes so as to regulate their models’. I honestly don’t know whether this casts doubt on the ability of octopuses to make intuitive self-models in your theory, since I suppose it would depend on the information contained in the posterior buccal lobe, and on what recurrent connections exist between the other supraesophageal lobes. Figure 2 has a wiring diagram for a system in the supraesophageal brain, and figure 7 gives a general overview of the brain circuitry involved in processing noxious stimuli. I would be very interested in trying to understand through what connection the octopus (or even the fish) could plausibly gain information it could use to make predictions about its own algorithm.
I might be totally off on this, but making a prediction about something like a state of attention / cortical seriality would surely require feedback connections from the higher-level output areas where the algorithm is doing ‘thinking’ to earlier layers, given that, for one, awareness of a stimulus seems to allow greater control of attention directed towards that stimulus, meaning the idea of awareness must have some sort of top-down influence on the visual cortex, no?
This makes me wonder: is it higher layers of the cortex that actually generate the predictive model of awareness, or is it local regions that predict the awareness concept due to feedback from the higher levels? I’m trying to construct some diagram in my head of how the brain models its own algorithm, but I’m a bit confused I think.
You are not obliged to give any in-depth response to this; I’ve just become interested, especially given the similarities between your model and Key’s and yet the serious potential ethical consequences of the differences.
fish also lack a laminated and columnar organization of neural regions that are strongly interconnected by reciprocal feedforward and feedback circuitry
Yeah that doesn’t mean much in itself: “Laminated and columnar” is how the neurons are arranged in space, but what matters algorithmically is how they’re connected. The bird pallium is neither laminated nor columnar, but is AFAICT functionally equivalent to a mammal cortex.
Which seems a little silly to me because I’m fairly certain humans without a cortex also show nociceptive behaviours?
My opinion (which is outside the scope of this series) is: (1) mammals without a cortex are not conscious, and (2) mammals without a cortex show nociceptive behaviors, and (3) nociceptive behaviors are not in themselves proof of “feeling pain” in the sense of consciousness. Argument for (3): You can also make a very simple mechanical mechanism (e.g. a bimetallic strip attached to a mousetrap-type mechanism) that quickly “recoils” from touching hot surfaces, but it seems pretty implausible that this mechanical mechanism “feels pain”.
(I think we’re in agreement on this?)
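[Editor’s note: to make the bimetallic-strip argument for (3) concrete, here is a deliberately dumb sketch. It is my own illustration with a made-up name and threshold, not anything from the post or from Key: a stateless threshold reflex that produces “recoil from heat” behavior while containing no learning, no internal model, and no hypothesis space at all.]

```python
THRESHOLD_C = 45.0  # arbitrary "too hot" temperature for this toy

def heat_reflex(temperature_c: float) -> str:
    """Purely feedforward: sensor reading in, motor command out.
    Nothing here is modeling anything, itself included."""
    return "withdraw" if temperature_c > THRESHOLD_C else "stay"

for t in (20.0, 44.9, 60.0):
    print(t, "->", heat_reflex(t))
```

[Behaviorally this “recoils from hot surfaces”, so the behavior alone cannot be what distinguishes a system that feels pain from one that doesn’t, which is the force of point (3).]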
~~
I know nothing about octopus nervous systems and am not currently planning to learn, sorry.
what matters algorithmically is how they’re connected
I just realised that quote didn’t mean what I thought it did. But yes, I do understand this, and Key seems to think the recurrent connections just aren’t strong (they are ‘diffusely interconnected’). But whether this means they have an intuitive self-model or not, honestly who knows; do you have any ideas of how you’d test it? Maybe like Graziano does with attentional control?
(I think we’re in agreement on this?)
Oh yes definitely.
I know nothing about octopus nervous systems and am not currently planning to learn, sorry.
Heheh that’s alright, I wasn’t expecting you to; thanks for thinking about it for a moment anyway. I will simply have to learn it myself.