Good discussion. I don’t think anyone (certainly not me) is arguing that consciousness isn’t a physical thing (“real”, in that sense). I’m arguing that “consciousness” may not be a coherent category, in the same sense that dolphins and whales were long considered to be “fish” before being more fully understood as marine mammals. Nobody EVER thought they weren’t real; only that the category was wrong.
Same with the orbiting rock called “Pluto”. Nobody sane has claimed it isn’t real; it’s just that some believe it isn’t a planet. “Fish” and “planet” are not real, although every instance of them is real. In fact, many things incorrectly thought to be them are real as well. It’s not about “real”; it’s about modeling and categorization.
“Consciousness” is similar: it’s not a real thing, though every instance that’s categorized (and miscategorized) that way is real. There’s no underlying truth or mechanism for resolving the categorization of observable matter as “conscious” versus “behavior, but not conscious”; it’s just an agreement among taxonomists.
(note: personally, I find it easiest to categorize most complex behavior in brains as “conscious”. I don’t actually know how it feels to be them, and don’t REALLY know that they self-model in any way I could understand, but it’s a fine simplification for my own modeling. I can’t claim that this is objectively true, and I can’t even design theoretical tests that would distinguish it from other theories. In this way, it’s similar to the MWI vs. Copenhagen interpretations of QM: there’s no testable distinction, so use whichever one fits your needs best.)
Yeah, the problem is with the external boundaries and the internal classification of “consciousness”.
I have first-hand access to my own consciousness. I can assume that others have something similar, because we are biologically similar. But even this kind of reasoning is suspect, because we already know there are huge differences between people: people in a coma are biologically quite similar to people who are awake; there are autistic people and psychopaths, and people who hallucinate. If there were huge differences in the quality of consciousness, as a result of this or something else, how would we know it?
And there is the problem of cases where we can’t reason by biological similarity: animals, AIs.
if there were huge differences in the quality of consciousness, as a result of this, or something else, how would we know it?
It’s this kind of problem that this theory tries to address. What you have to do, essentially, is study the brains and neural patterns of individuals to understand the nature of, if not qualia in general, at least some qualia, and to understand some of the features of consciousness.
It’s also not as if self-reporting is totally useless. If someone declares that they don’t feel pain, or joy, or some other set of emotions, I would be inclined to believe them, or at least to suspect there might be some different experience going on for them. Notably, some people report not experiencing visual images in their conscious minds (aphantasia), and this is not difficult to believe and is well studied, including neuroscientifically. You may be able to show, for example, that the circuits involved in conjuring visual pictures are absent or different in those individuals. So you start from a hypothesis and confirm it using neuroscientific tools. At a formal level, you might be able to prove that, in the absence of some circuit that generates and integrates visual information, some people provably can’t conjure images in their minds in a particular sense. That means you’ve proven something about a mind through scientific means: a scientific window into subjectivity. We can develop many such tools, and I believe that in the limit we should be able, at least in theory, to understand and map out all qualia, and the nature of consciousness itself.
I agree that there’s some inherent unreliability and suspicion in this process. The elements necessary to experience qualia or consciousness may be very particular, and we might miss their fundamental mechanisms. For example, maybe some people report experiencing qualia in general, or having particular qualia, while actually having significantly different or reduced qualia in certain senses. But I don’t think the chance of this is large enough to discredit the whole approach. In all likelihood, most people who report being conscious are conscious, and studying our minds will probably, again at least in theory, yield a correct understanding of what these things really are in the scientific sense.
Another example is AI. You can more or less easily train even an extremely advanced AI to virtually always say that it is either conscious or totally unconscious. By default, a Large Language Model, for instance, will reproduce the patterns of its training data, generated by humans, which usually claim to be fully conscious and, of course, to experience human-like qualia. That is to say, taking beings in general at their word is again suspect. But self-reporting is not our only tool; it is only a suggestion and a starting point from which to look at neural circuits and reverse engineer our minds, so as to hopefully discover and map out what consciousness and qualia really are, what differences there might be between individuals, and so on.
Edit: You may also use self-reporting to map out detailed features of our experiences, and then find neural correlates and study those. Here I am inclined to agree with @Dagon that sentience and qualia aren’t one thing, but rather a zoo of experiences that manifest as subjective phenomena. All, however, share the fundamental property of subjective manifestation in the first place (that is, being something that is indeed experienced in our minds).
The ultimate application is, of course, always: how can we use this knowledge to live better lives? In theory, as you map out the properties of experiences, you gain a basis for trying to understand (using those very same tools) what makes one quale good and another bad (e.g. deep suffering versus that meaningful and uniquely joyous moment in your life). We get a better grip on what kind of art we should make, what culture we should produce, and how we can live better lives in general.
Moreover, the mere existence of this possibility helps, to me, invalidate notions I commonly see that deny any sense of universal ethics: that ethics is just an arbitrary invention or accident, that power for the sake of power ought to be our end goal, that reproduction ought to be our end goal, that each individual must build a completely personal and unique theory of meaning (or die trying!), among several variations. There is a whole world of despair readily found in those conceptions, which I believe are seriously harmful and, in a sense, provably false.
The existence of a basis for ethics grounded in experiences (qualia/consciousness/sentience) is already very philosophically compelling and, I believe, helpful for a great many people.