Like I said in this post, I think the contents of conscious awareness correspond more-or-less to what’s happening in the cortex. The homolog to the cortex in non-mammal vertebrates is called the “pallium”, and the pallium along with the striatum and a few other odds and ends comprises the “telencephalon”.
I don’t know anything about octopuses, but I would be very surprised if the fish pallium lacked recurrent connections. I don’t think your link says that though. The relevant part seems to be:
While the fish retina projects diffusely to nine nuclei in the diencephalon, its main target is the midbrain optic tectum (Burrill and Easter, 1994). Thus, the fish visual system is highly parcellated, at least, in the sub-telencephalonic regions. Whole brain imaging during visuomotor reflexes reveals widespread neural activity in the diencephalon, midbrain and hindbrain in zebrafish, but these regions appear to act mostly as feedforward pathways (Sarvestani et al., 2013; Kubo et al., 2014; Portugues et al., 2014). When recurrent feedback is present (e.g., in the brainstem circuitry responsible for eye movement), it is weak and usually arises only from the next nucleus within a linear hierarchical circuit (Joshua and Lisberger, 2014). In conclusion, fish lack the strong reciprocal and networked circuitry required for conscious neural processing.
This passage is just about the “sub-telencephalonic regions”, i.e. they’re not talking about the pallium.
To be clear, the stuff happening in sub-telencephalonic regions (e.g. the brainstem) is often relevant to consciousness, of course, even if it’s not itself part of consciousness. One reason is that stuff happening in the brainstem can turn into interoceptive sensory inputs to the pallium / cortex. Another reason is that stuff happening in the brainstem can directly mess with what’s happening in the pallium / cortex in other ways besides serving as sensory inputs. One example is (what I call) the valence signal, which can make conscious thoughts either stay or go away. Another is (what I call) “involuntary attention”.
Umm, I would phrase it as: there’s a particular computational task called approximate Bayesian probabilistic inference, and I think the cortex / pallium performs that task (among others) in vertebrates, and I don’t think it’s possible for biological neurons to perform that task without lots of recurrent connections.
And if there’s an organism that doesn’t perform that task at all, then it would have neither an intuitive self-model nor an intuitive model of anything else, at least not in any sense that’s analogous to ours and that I know how to think about.
To be clear: (1) I think you can have some brain region with lots of recurrent connections that has nothing to do with intuitive modeling, and (2) it’s possible for a brain region to perform approximate Bayesian probabilistic inference and have recurrent connections, but still not have an intuitive self-model, for example if the hypothesis space is closer to a simple lookup table than to a complicated hypothesis space involving complex compositional interacting entities, etc.
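Here’s a toy sketch of what I mean by the recurrence point (illustrative only; all the numbers are made up, and it’s obviously not a model of real neurons): sequential Bayesian filtering over a discrete hidden state. Each step’s posterior feeds back as the next step’s prior, so the computation is a loop over its own output, not a single feedforward pass. Note also that the hypothesis space here is basically a flat lookup table of three interchangeable states, so it doubles as an example of (2): recurrent Bayesian inference with nothing remotely like an intuitive self-model.

```python
import numpy as np

# Toy sequential Bayesian filtering (an HMM "forward" pass).
# All numbers are made up for illustration.

T = np.array([[0.8, 0.1, 0.1],   # T[i, j] = P(next hidden state j | current state i)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
E = np.array([[0.9, 0.1],        # E[i, o] = P(observation o | hidden state i)
              [0.5, 0.5],
              [0.1, 0.9]])

def filter_posterior(observations):
    belief = np.ones(3) / 3                # uniform prior over the 3 states
    for o in observations:
        belief = belief @ T                # predict: push belief through the dynamics
        belief = belief * E[:, o]          # update: reweight by the likelihood of o
        belief = belief / belief.sum()     # renormalize to a probability distribution
        # Crucially, `belief` feeds back into the next loop iteration: the
        # posterior at step t is the prior at step t+1. That feedback loop is
        # the recurrence; it can't be unrolled into a fixed-depth feedforward
        # pass for arbitrarily long input streams.
    return belief

print(filter_posterior([0, 0, 1, 1, 1]))   # posterior over states after 5 observations
```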
why do you think some invertebrates likely have intuitive self models as well?
I didn’t quite say that. I made a weaker claim that “presumably many invertebrates [are] active agents with predictive learning algorithms in their brain, and hence their predictive learning algorithms are…incentivized to build intuitive self-models”.
It seems reasonable to presume that octopuses have predictive learning algorithms in their nervous systems, because AFAIK that’s the only practical way to wind up with a flexible and forward-looking understanding of the consequences of your actions, and octopuses (at least) are clearly able to plan ahead in a flexible way.
However, “incentivized to build intuitive self-models” does not necessarily imply “does in fact build intuitive self-models”. As I wrote in §1.4.1, just because a learning algorithm is incentivized to capture some pattern in its input data, doesn’t mean it actually will succeed in doing so.
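Toy illustration of “incentivized” ≠ “succeeds” (my own example, nothing to do with brains): a linear model trained by gradient descent on XOR. The loss gradient “incentivizes” it to capture the pattern at every step, but the pattern isn’t representable in its hypothesis class, so it never does, no matter how long it trains:

```python
import numpy as np

# Four XOR input/output pairs: no linear function fits them.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

w, b = np.zeros(2), 0.0
for _ in range(10_000):                 # plenty of training
    err = X @ w + b - y                 # prediction errors
    w -= 0.1 * (X.T @ err) / len(y)     # gradient step on mean squared error
    b -= 0.1 * err.mean()

print(X @ w + b)                        # ~[0.5, 0.5, 0.5, 0.5]: the "incentive"
                                        # (nonzero loss) was there all along,
                                        # but the pattern was never captured
```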
Would you restrict this possibility to basically just cephalopods and the like
No opinion.
fish also lack a laminated and columnar organization of neural regions that are strongly interconnected by reciprocal feedforward and feedback circuitry
Yeah that doesn’t mean much in itself: “Laminated and columnar” is how the neurons are arranged in space, but what matters algorithmically is how they’re connected. The bird pallium is neither laminated nor columnar, but is AFAICT functionally equivalent to a mammal cortex.
Which seems a little silly for me because I’m fairly certain humans without a cortex also show nociceptive behaviours?
My opinion (which is outside the scope of this series) is: (1) mammals without a cortex are not conscious, and (2) mammals without a cortex show nociceptive behaviors, and (3) nociceptive behaviors are not in themselves proof of “feeling pain” in the sense of consciousness. Argument for (3): You can also make a very simple mechanical mechanism (e.g. a bimetallic strip attached to a mousetrap-type mechanism) that quickly “recoils” from touching hot surfaces, but it seems pretty implausible that this mechanical mechanism “feels pain”.
(I think we’re in agreement on this?)
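For concreteness, here’s the bimetallic-strip argument as toy code (the threshold number is arbitrary): a fixed stimulus-response rule that produces nociceptive-looking “recoil” behavior with no learning, no internal state, and no model of anything:

```python
PAIN_THRESHOLD_C = 45.0          # arbitrary number for illustration

def reflex(surface_temp_c):
    """Fixed stimulus-response mapping; no state, no learning, no model."""
    return "recoil" if surface_temp_c > PAIN_THRESHOLD_C else "stay"

print(reflex(60.0))              # -> "recoil", i.e. nociceptive-looking behavior
```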
~~
I know nothing about octopus nervous systems and am not currently planning to learn, sorry.