Have you tried this on humans? Humans with dementia? Very young humans? Humans who speak a different language? How about chimpanzees or whales? I'd love to see the truth table of the different entities you've tested: how they did on the test, and how you perceive their consciousness.
I think we need to start out by biting some pretty painful bullets:
1) "Consciousness" is not operationally defined, and has different meanings in different contexts. Sometimes intentionally, as a bait-and-switch to support moral or empathy arguments rather than more objective uses.
2) For any given usage of a concept of consciousness, there are going to be variations among humans. If we don't acknowledge or believe there ARE differences among humans, then it's either not a real measure or it's so coarse that a lot of non-humans already qualify.
3) If you DO come up with a test for some conception of consciousness, it won't change anyone's attitudes, because that's not the dimension they say they care about.
3b) Most people don't actually care about any metric or concrete demonstration of consciousness; they care about their intuition-level empathy, which is based far more on "like me in identifiable ways" than on anything that can be measured and contradicted.
Just clarifying something important: Schneider's ACT is proposed as a sufficient test of consciousness, not a necessary one. So the fact that young children, dementia patients, animals, etc. would fail the test isn't a problem for the argument. A pass indicates consciousness; a fail indicates nothing either way. It just means the test is silent about these entities, whose consciousness we attribute on other grounds than we do for typically functioning adults.
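The asymmetry of a sufficient-but-not-necessary test can be made concrete with a tiny sketch. This is purely illustrative logic, not Schneider's actual protocol; the function name and the three-valued interpretation are my own framing.

```python
from typing import Optional

def interpret_sufficient_test(passed: bool) -> Optional[bool]:
    """Interpret a test that is sufficient but NOT necessary for consciousness.

    A pass licenses the conclusion "conscious" (True); a fail licenses
    no conclusion at all (None) — never "not conscious" (False).
    """
    return True if passed else None

# A failing entity (an infant, a dementia patient, a whale) yields None:
# the test simply doesn't speak to it, rather than counting against it.
```

A necessary test would behave differently: there, a fail would license "not conscious," which is exactly the inference the objection in the first comment assumes and the ACT does not claim.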
I agree with your points about the multiple meanings of "consciousness," the potential for equivocation, and the gap between evidence and "intuition."
Importantly, the claim here concerns phenomenal consciousness, i.e. subjective experience, the "what it is like" of being conscious, rather than merely cognitive consciousness. So if we seriously entertain the idea that AI systems have phenomenal consciousness, that does have serious ethical implications regardless of our own intuitive emotional response.