Just to clarify an important point: Schneider’s ACT is proposed as a sufficient test of consciousness, not a necessary one. So the fact that young children, dementia patients, animals, etc. would fail the test isn’t a problem for the argument; it simply means that consciousness in these entities must be established on other grounds, or manifests in other ways, than in typically functioning adults.
I agree with your points about the multiple meanings of consciousness, the risk of equivocation, and the gap between evidence and “intuition.”
Importantly, the claim here concerns phenomenal consciousness, i.e. subjective experience and “what it feels like” to be conscious, rather than merely cognitive consciousness. So if we seriously entertain the idea that AI systems do have phenomenal consciousness, this carries serious ethical implications regardless of our own intuitive emotional response.