Well, this does help to explain a lot of why so many people make such vague and weird claims about consciousness. It would be interesting to get a more fine-grained analysis done, though…
- What is your naive definition of consciousness? (When someone asks “how do you explain consciousness?”, what is the thing you start trying to explain? I.e., qualia? People saying they feel things?)
- What is your gears model of what consciousness is? (How, in fact, do you explain the thing you pointed at in the first question?)
- What aspects of consciousness do you value? (Are the things that make a person a moral subject part of your naive definition, your gears definition, or outside of your definition of consciousness entirely?)
Personally, I mostly use “consciousness” to refer to qualia. My hunch is that qualia emerge from certain structures as a sort of mathematical phenomenon, but I’m not committed to any one gears model of how that works, and as a result, I don’t really value consciousness at all.
I instead value some of the gears that may or may not necessitate consciousness, such as self-awareness and agentic coherence. Whether or not GPT-4 has qualia is not directly relevant to whether it is a moral subject. P-zombies can be moral subjects in my book, unless, of course, the things I actually value cause qualia; but knowing that is unnecessary for my determination of moral subjecthood.
It seems… very concerning to me to base moral subjecthood on something as unfalsifiable as the philosophical problem of P-zombies.
CloudHeadedTranshumanist
Depends on whether you think qualia are separable from awareness. You need contextual awareness to be intelligent, so you can’t really optimize it away.
… Also, as an aside: if they are separable and LLMs actually aren’t qualiaful… then this only proves that I value qualia way less than I thought I did, and a Disneyland without children would be fine, actually.