Interesting post! I have a couple of questions to help clarify the position:
1. There’s a growing body of evidence, e.g. this paper, that creatures like octopuses show behavioural evidence of an affective pain-like response. How would you account for this? Would you say they’re not really feeling pain in a phenomenal consciousness sense?
2. I could imagine an LLM-like system passing the threshold for the use-mention distinction in the post (although maybe this would depend on how “hidden” the socially damning thoughts are, e.g. if it writes out damning thoughts in its CoT but not in its final response, does this count?). Would your model treat the LLM-like system as conscious? Or would it need additional features?
Phenomenal consciousness (i.e., conscious self-awareness) is clearly not required for pain responses. Many more animals (and much simpler animals) exhibit pain responses than plausibly possess phenomenal consciousness.
To be clear, I’m using the term phenomenal consciousness in the Nagel (1974) & Block (1995) sense that there is something it is like to be that system.
Your reply equates phenomenal consciousness with conscious self-awareness, which is a stronger criterion than the one I’m using. Could you clarify which definition of self-awareness you have in mind?
1. Body-schema self-model: an embodied agent tracking the position and status of its limbs as it interacts with and moves about the world.
2. Counterfactual valence planning: e.g. the agent thinks “it will hurt”, “I’ll get food”, etc. when planning.
3. Higher-order thought: the agent entertains a meta-representation like “I am experiencing X”.
4. Something else?
Octopuses qualify as self-aware under 1) and 2) from the paper I linked above—but no one claims they satisfy 3).
For what it’s worth, I tend away from the idea that 3) is required for phenomenal consciousness as I find Block’s arguments from phenomenal overflow compelling. But it’s a respected minority view in the philosophical community.
Phenomenal consciousness, not self-awareness.
I mean, I think it’s like when Opus says it has emotions. I don’t think it “has emotions” in the way we mean that when talking to each other. I don’t think the sense in which this [the potential lack of subjective experience] can be true of animals is intuitive for most people to grasp. But I don’t think “affective pain-like response in octopuses in specific” is particularly compelling evidence for consciousness over, just, like, the fact that nonhuman animals seem to pursue things and react ~affectively to stimuli. I’m a bit puzzled why you would reference a specific study on octopuses, honestly, when cats and squirrels cry out all the time in what appears obviously-to-humans to be pain or anger.
Like with any other creature, you could just do some kind of mirror test. Unfortunately I have to refrain from constructing one I think would work on LLMs because people exist right now who would have the first-order desire and possibly the resources to just deliberately try and build an LLM that would pass it. Not because they would actually need their LLM to have any particular capabilities that would come with consciousness, but because it would be great for usership/sales/funding if they could say “Ooh, we extra super built the Torment Nexus!”
Ok interesting, I think this substantially clarifies your position.
I’m a bit puzzled why you would reference a specific study on octopuses, honestly, when cats and squirrels cry out all the time in what appears obviously-to-humans to be pain or anger.
Two reasons:
1. It just happened to be a paper I was familiar with, and
2. I didn’t fully appreciate how willing you’d be to run the argument for animals more similar to humans, like cats or squirrels. In retrospect, this is pretty clearly implied by your post and the link from EY you posted for context. My bad!
I don’t think it “has emotions” in the way we mean that when talking to each other.
I grant that animals have a substantially different neurological structure from humans. But I don’t think this implies that what’s happening when they’re screaming or reacting to aversive stimuli is so foreign we wouldn’t even recognise it as pain, and I really don’t think it implies an absence of phenomenal experience.
Consider a frog snapping its tongue at an object it thinks is a fly. It obviously has a different meaning for [fly] than humans have: a human would never try to eat the fly! But I’d argue the frog’s concept of a fly as [food] overlaps with the human concept of [food]. We both eat through our mouths, eat to maintain nutrition and normal bodily functioning, eat because we get hungry, etc. The presence of all these evolutionarily selected functions is what it means for a system to consider something [food] or to consider itself [hungry]. Likewise, the implementation of a negatively valenced affective response, even if its specific profile differs across animals, is closely related enough for us to call it [pain].
In the study I linked, the octopus is:
1. Recalling the episode where it was exposed to aversive stimuli.
2. Binding that episode to a spatial context, e.g. the particular chamber where it occurred.
3. Evaluating analgesic states as intrinsically good.
If the functional profile of pain is replicated, what grounds do we have to say the animals are not actually experiencing pain phenomenally?
I think where we fundamentally differ is on what level of self-modelling is required for phenomenal experience. I find it plausible that some “inner listener” might be required for experiences to register phenomenally, but I don’t think the level of self-modelling required is so sophisticated. Consider that animals navigating their environment must have some simple self-model in order to coordinate their limbs, avoid obstacles, etc. This requires representing [self] vs [world] and tracking what’s good or bad for the animal itself.
All this said, I really liked the post. I think the use-mention distinction is interesting and a pretty good candidate for why sophisticated self-modelling evolved in humans. I’m just not convinced by the link to phenomenal consciousness.