Phenomenal consciousness (i.e., conscious self-awareness) is clearly not required for pain responses. Many more animals, and much simpler animals, exhibit pain responses than plausibly possess phenomenal consciousness.
To be clear, I’m using the term phenomenal consciousness in the Nagel (1974) and Block (1995) sense: that there is something it is like to be that system.
Your reply equates phenomenal consciousness with conscious self-awareness, which is a stronger criterion than the one I’m using. To pin down what you mean by self-awareness, could you say which of the following definitions you have in mind?
1) Body-schema self-model: an embodied agent tracks the position and status of its limbs as it interacts with and moves about the world.
2) Counterfactual valence planning: the agent represents outcomes like “it will hurt” or “I’ll get food” when planning.
3) Higher-order thought: the agent entertains a meta-representation like “I am experiencing X”.
4) Something else?
Octopuses qualify as self-aware under 1) and 2) from the paper I linked above—but no one claims they satisfy 3).
For what it’s worth, I lean away from the idea that 3) is required for phenomenal consciousness, as I find Block’s arguments from phenomenal overflow compelling. But I acknowledge that is a minority view, albeit a respected one, in the philosophical community.
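To make the gap between 2) and 3) concrete, here is a rough toy sketch (entirely my own illustration; the agent, its world model, and all names are made up and have nothing to do with the linked paper). It shows an agent that plans over predicted valences, in the sense of criterion 2), without ever forming a meta-representation of its own states, as criterion 3) would require.

```python
# Toy illustration (purely hypothetical): an agent that satisfies
# criterion 2 (counterfactual valence planning) without anything like
# criterion 3 (higher-order thought). It predicts the valence of outcomes
# for candidate actions and picks the best, but it never represents
# "I am experiencing X"; there is no model of its own states at all.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    valence: float  # negative = "it will hurt", positive = "I'll get food"

# Assumed world model: maps candidate actions to predicted outcomes.
WORLD_MODEL = {
    "touch_hot_stove": Outcome("burned limb", -10.0),
    "open_shell":      Outcome("food obtained", +5.0),
    "hide_in_den":     Outcome("nothing happens", 0.0),
}

def plan(actions: list[str]) -> str:
    """Pick the action whose predicted outcome has the highest valence.

    This is first-order planning over predicted valences. Nothing here
    is a meta-representation of the agent's own experience.
    """
    return max(actions, key=lambda a: WORLD_MODEL[a].valence)

if __name__ == "__main__":
    chosen = plan(list(WORLD_MODEL))
    print(f"Chosen action: {chosen}")  # -> open_shell
```

Nothing in that loop represents the agent’s own experience, which is why satisfying 2) doesn’t obviously get you 3).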
Phenomenal consciousness, not self-awareness.