Or no, sorry, I’ve gone back over the papers and I’m still a bit confused.
Brian Key seems to specifically claim that fish and octopuses cannot feel pain, with reference to the recurrent connections of their pallium (+ the octopus equivalent, which seems to be the supraesophageal complex).
fish also lack a laminated and columnar organization of neural regions that are strongly interconnected by reciprocal feedforward and feedback circuitry [...] Although the medial pallium is weakly homologous to the mammalian amygdala, these structures principally possess feedforward circuits that execute nociceptive defensive behaviours
However, he then also claims:
This conclusion is supported by lesion studies that have shown that neither the medial pallium nor the whole pallium is required for escape behaviours from electric shock stimuli in fish (Portavella et al., 2004). Therefore, given that the pallium is not even involved in nociceptive behaviours, it could not be inferred that it plays a role in pain.
Which seems a little silly to me, because I’m fairly certain humans without a cortex also show nociceptive behaviours?
Which makes me think his claim (with regard to fish consciousness, at least) is really just that the feedback circuitry required for the brain to make predictions about its own algorithm (and thus become subjectively aware) just isn’t strong enough / is too minimal? He does source a pretty vast amount of information to try and justify this, so much that I haven’t meaningfully made a start on it yet; it’s pretty overwhelming. Overall I just feel more uncertain.
I’ve gone back over his paper on octopuses with my increased understanding, and he specifically seems to make reference to a lack of feedback connections between lobes (not just subesophageal lobes). In particular, he focuses on the fact that the posterior buccal lobe (which is supraesophageal) has ‘no second-order sensory fibres (that) subsequently project from the brachial lobe to the inferior frontal system’, meaning that ‘it lacks the ability to feedback prediction errors to these lobes so as to regulate their models’. I honestly don’t know whether this casts doubt on the ability of octopuses to make intuitive self-models in your theory, since I suppose it would depend on the information contained in the posterior buccal lobe, and on what recurrent connections exist between the other supraesophageal lobes. Figure 2 has a wiring diagram for a system in the supraesophageal brain, and figure 7 gives a general overview of the brain circuitry involved in processing noxious stimuli. I would be very interested in trying to understand through what connection the octopus (or even the fish) could plausibly gain information it could use to make predictions about its own algorithm.
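Just to check I’ve understood the mechanism he’s pointing at, here’s a toy sketch of ‘feedback of prediction errors regulating a model’ in code. This is purely illustrative, with made-up module names; it is not octopus anatomy and not Key’s actual model. The point is just that a downstream module can compute a prediction error, but unless a feedback pathway exists, the upstream model never gets regulated by it:

```python
# Toy sketch (hypothetical names, nothing to do with real octopus lobes):
# an upstream module holds a simple "model" (here just a running estimate of
# a sensory signal), a downstream module computes the prediction error, and
# only when a feedback pathway exists does that error get sent back to
# regulate the upstream model.

def run(signal=1.0, steps=50, feedback=True, learning_rate=0.2):
    upstream_estimate = 0.0                 # the upstream module's internal model
    for _ in range(steps):
        prediction = upstream_estimate      # upstream sends its prediction forward
        error = signal - prediction         # downstream computes the prediction error
        if feedback:
            # Feedback pathway present: the error returns and regulates the model.
            upstream_estimate += learning_rate * error
        # Without feedback, the error exists downstream but the upstream
        # model is never corrected by it.
    return upstream_estimate

print(run(feedback=True))    # ~1.0: the model has been regulated by its own errors
print(run(feedback=False))   # 0.0: no feedback, the model never improves
```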
I might be totally off on this, but to make a prediction about something like a state of attention / cortical seriality would surely require feedback connections from the higher-level output areas where the algorithm is doing the ‘thinking’ back to earlier layers? For one thing, awareness of a stimulus seems to allow greater control of attention directed towards that stimulus, which means the idea of awareness must have some sort of top-down influence on the visual cortex, no?
This makes me wonder: is it higher layers of the cortex that actually generate the predictive model of awareness, or is it the local regions that predict the awareness concept, due to feedback from the higher levels? I’m trying to construct some diagram in my head by which the brain models its own algorithm, but I’m a bit confused I think.
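To make my intuition concrete (again, this is a made-up toy, not any real cortical model): a higher-level stage forms a crude ‘what am I attending to’ estimate, and a top-down feedback connection lets that estimate boost the gain on the corresponding input in the earlier stage. Without the feedback, being ‘aware’ of a stimulus couldn’t increase the attention directed at it:

```python
# Toy sketch of top-down feedback (made-up, not a real cortical model): the
# higher stage estimates which stimulus currently dominates processing (a
# stand-in for "awareness of a stimulus"), and feedback lets that estimate
# increase the early-layer gain on that stimulus.

def process(inputs, steps=5, top_down_feedback=True, boost=0.5):
    gains = [1.0] * len(inputs)            # early-layer gain per stimulus
    attended = None
    for _ in range(steps):
        # Early layer: response = input strength * current gain.
        responses = [x * g for x, g in zip(inputs, gains)]
        # Higher layer: crude "awareness" estimate = the dominant stimulus.
        attended = max(range(len(inputs)), key=lambda i: responses[i])
        if top_down_feedback:
            # Top-down influence: being "aware" of stimulus i boosts attention to it.
            gains[attended] += boost
    return attended, gains

print(process([0.4, 0.5, 0.3], top_down_feedback=True))   # gains end up biased toward index 1
print(process([0.4, 0.5, 0.3], top_down_feedback=False))  # gains stay flat, no top-down effect
```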
You are not obliged to give any in-depth response to this; I’ve just become interested, especially given the similarities between your model and Key’s, and yet the serious potential ethical consequences of the differences.
fish also lack a laminated and columnar organization of neural regions that are strongly interconnected by reciprocal feedforward and feedback circuitry
Yeah that doesn’t mean much in itself: “Laminated and columnar” is how the neurons are arranged in space, but what matters algorithmically is how they’re connected. The bird pallium is neither laminated nor columnar, but is AFAICT functionally equivalent to a mammal cortex.
Which seems a little silly for me because I’m fairly certain humans without a cortex also show nociceptive behaviours?
My opinion (which is outside the scope of this series) is: (1) mammals without a cortex are not conscious, and (2) mammals without a cortex show nociceptive behaviors, and (3) nociceptive behaviors are not in themselves proof of “feeling pain” in the sense of consciousness. Argument for (3): You can also make a very simple mechanical mechanism (e.g. a bimetallic strip attached to a mousetrap-type mechanism) that quickly “recoils” from touching hot surfaces, but it seems pretty implausible that this mechanical mechanism “feels pain”.
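To spell out how little machinery that takes, here’s a minimal sketch of that kind of mechanism (purely illustrative, with an arbitrary threshold): it produces “nociceptive behaviour” in the behavioural sense, but there’s nothing in it that could plausibly be feeling pain:

```python
# A minimal "recoil from hot surfaces" mechanism, analogous to the bimetallic
# strip + mousetrap example: a fixed threshold and an output, with no memory,
# no learning, and no model of a "self" that is in pain.

RECOIL_THRESHOLD_C = 45.0   # arbitrary trigger point, like the strip's bending temperature

def reflex(surface_temperature_c: float) -> str:
    return "recoil" if surface_temperature_c > RECOIL_THRESHOLD_C else "stay"

print(reflex(80.0))  # recoil
print(reflex(20.0))  # stay
```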
(I think we’re in agreement on this?)
~~
I know nothing about octopus nervous systems and am not currently planning to learn, sorry.
what matters algorithmically is how they’re connected
I just realised that quote didn’t mean what I thought it did. But yes, I do understand this, and Key seems to think the recurrent connections just aren’t strong (they are ‘diffusely interconnected’). Whether this means they have an intuitive self-model or not, honestly, who knows. Do you have any ideas of how you’d test it? Maybe like Graziano does with attentional control?
(I think we’re in agreement on this?)
Oh yes definitely.
I know nothing about octopus nervous systems and am not currently planning to learn, sorry.
Heheh that’s alright, I wasn’t expecting you to. Thanks for thinking about it for a moment anyway. I will simply have to learn myself.