I personally think it’s useful to keep metacognition and consciousness separate as far as concepts go. This is generally the approach in philosophy of mind (e.g., Searle, Nagel, Chalmers). Blending the concepts obfuscates what’s interesting about metacognition and what’s interesting about consciousness.
So in my view, AI clearly excels at metacognition, but it’s an open question whether it’s conscious. Human babies are very likely conscious, but lack any metacognition.
Consciousness is useful as a concept apart from metacognition because consciousness is, by my account, a required feature of moral consideration. It’s a prerequisite for the quale that is “pain”. Since I think animals and babies are conscious and can feel pain, they automatically receive moral consideration in my book.
Testing for consciousness is a fraught and likely impossible task, but I don’t think that means we shouldn’t have a word for it or that we should intermingle it with the concept of metacognition. AI very well may be conscious and perform metacognition, or it may be unconscious and yet still perform metacognition.
One should never refer to consciousness∖metacognition (consciousness with metacognition subtracted out) as ‘consciousness’, because the conflation of those two things under the common sense of the word is so strong. And I don’t believe there’s a need to salvage ‘consciousness’: we have better words for this exclusion. You can call it observer-measure, indexical prior, subjectivity, or experiencingness. Philosophers will recognise ‘subjectivity’ or ‘experiencingness’, but the other names are new; they come from the Bayesian school, which has been so much better at thinking about this kind of thing than academic philosophers that I don’t see a reason to sacrifice clarity to stay in dialogue with them.
I misunderstood your original point, and I am completely fine with using words like “subjectivity” and “experiencingness” for the sake of clarity. Perhaps those words should be used in the quiz if the original poster intended to use that definition. The original poster was frustrated by the lack of clarity in consciousness discussions, and I think definitions are (partially) to blame.