[Question] The Subject Of Negotiation
This post is not a polished essay but a list of questions meant to bridge my perspective with the terminology and interpretations of consciousness enumerated in Rafael Harth's post[1]. Specifically, it is an effort to understand how individuals holding the Camp #1 perspective cohere their worldview.
First, please correct this summary of how I currently understand consciousness to be conceived in the Camp #1 and Camp #2 worldviews.
Camp One-ers generally believe:
There is no such thing as qualia, including phenomenal self-aware experience.
Camp Two-ers believe:
The phenomenal experience of self-awareness is intrinsically qualia, whether there is an ontological basis for it or not.
If this is correct so far, then a disclaimer: I am most likely a Camp #2 member.
To clarify my interpretation of the terms and the motivation of this post: I believe that consciousness is the taxonomic label we have applied to some general structure of matter from which perspective seems to emerge. I do not think this structure is mystical or that it requires any extra-dimensional components. I view it as a gestalt, but a gestalt worthy of its own taxonomic label, because it seems to be that from which all value is applied and the basis of all normative discourse, and the thing from which our universalist decision theories and alignment practices should derive their value functions, whether or not it is ontologically reducible. Because of that, ignoring it, or dismissing it as no different from anything else, seems in some sense a dangerous move to me, especially when we are on the brink of machines that may adopt and exercise systems of belief determined by top minds in online discourse. I view qualia as the states experienced in the interiority of whatever matter gives rise to perspective, otherwise referred to as phenomenal valence.
If all is well so far, the following questions go out to the Camp #1 crowd:
When you suggest that there is no phenomenal first-person self-aware experience over and above what we can measure, is that something you deny ontologically (A) or pragmatically (B)?
If A: Then why is there something rather than nothing? This sounds naive, but I can't reconcile how the absence of phenomenal experience can be asserted without the tacit acknowledgement that things are happening, and the experiencing of things happening being consciousness. Denying this seems to be a non-starter, since doing so would require some experience to observe and/or participate in the denial. Given that, if we assume that at some loci of time and space experience can exist, and at others it can't or doesn't, then why aren't the structural forms which give rise to such something-ness given any special taxonomic marker? Since experiencing can only be confirmed within its interiority, isn't the assumption that brains = experience, because I experience and I have a brain, a category error? Aren't we assuming something is derivative from that which can only self-identify with its likeness? For example, from a 1,000-foot view, a person in pain may be conceived as simply a malperforming algorithm. But to treat the human-agent algorithm's malperformance as categorically more important than, and dissimilar to, any mechanical complex optimizer relies on some association of that external system's behaviour with a sympathetic interiority which views it as self-similar and then grants it moral consideration. How can it be acknowledged in the first-person case, then viewed as moot thereafter?
If B: Then what pragmatic benefit does this denial of the phenomenon offer? I assume it's denied on the basis that there is no known science which suggests it exists, or that we are phenomenally dissimilar from any other studied matter. But science in and of itself is a tool for establishing facts agents can agree on inter-subjectively. We cohere to science because without it everything becomes a value argument. But without a taxonomy that distinguishes the matter to which those inter-subjective agreements apply, and why inter-subjective agreement is useful at all, how do we even argue or articulate the value of science, or qualify whom that negotiation is useful between? If the answer is "whoever is capable and qualified to engage in scientific discourse", does that not risk the agents participating in science self-selecting to whom and to what science should apply? How do we negotiate who should dictate its trajectory without falling into circular reference, where he who has the most science is justified in unilaterally determining the remainder of facts? If we say democratic processes, what equivalently impartial basis, comparable to science itself, do we have for determining the subject set whose facts it lends its utility to resolving? In the reach for alignment of both humans and whatever computational equivalents we qualify as such, don't you need a taxonomy to evaluate what counts as value in whatever utility function we would like the answer to that question to be? And why would "that which has qualia" be an insufficient label for it? I understand we can use observable welfare indicators (pain behaviour circuits, preference satisfaction, etc.) to evaluate 'goodness'. But how does this not formulate a definition of consciousness that is implicitly derived from the similarity of external phenomena to the interiority of our inner state sets, the very interiority that seems to be denied?
Lastly, if not the noun 'qualia', what do you propose we label that which gives rise to the structures that have experience and that we fundamentally care about not hurting? Consciousness? And why is the phenomenal valence that earns them that care not just 'qualia'? What other noun do we have, so long as we are willing to make the tacit admission that feeling matters, on whatever ontological substrates we inevitably find it to exist? If you can say that it theoretically doesn't matter, but it still matters to you in a way that's never externally observed, isn't that the phenomenon itself?
I know this is a bit of a fire hose, but I can't seem to bridge these gaps either by consuming the canon or by perusing popular rationalist discourse on the matter, and I genuinely want to bridge both worldviews.
[1] https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness
I'm not sure I fit in either camp. I'm certain I don't agree with the rarely-questioned assumption in "structures that have experience and that we fundamentally care about not hurting". I'm not at all sure that other structures have experience similar to mine, AND I don't think this is the only reason I care about many of them.
I assert that I experience something that I can’t detect in anyone else. I expect this is what others are calling “qualia” or “experience”, but I can’t be sure, as I can’t tell what they’re experiencing. It seems reasonable and kind to assume it’s similar, even though it may vary in intensity in ways I can’t really measure. It’s probably less intense in carrots than in whales or humans, and I suspect it’s superlinear, possibly exponential with … something something neural complexity, but that’s really pure speculation on my part, for something I can’t detect or measure.
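To show what that superlinear guess would even mean, here is a toy sketch. Everything in it is an assumption: the exponent `ALPHA` is invented, neuron count is a crude complexity proxy, and the counts themselves are only rough orders of magnitude.

```python
# Toy illustration of the (purely speculative) idea that experience
# intensity grows superlinearly with neural complexity.
ALPHA = 1.5  # invented exponent; >1 means superlinear, nothing pins down the value


def speculative_intensity(neuron_count: float, alpha: float = ALPHA) -> float:
    """Map a crude complexity proxy to a made-up intensity score."""
    return neuron_count ** alpha


# Rough neuron-count orders of magnitude, for scale only.
organisms = {
    "carrot (no neurons)": 0,
    "lobster (~1e5)": 1e5,
    "whale (~1e10)": 1e10,
    "human (~8.6e10)": 8.6e10,
}

for name, n in organisms.items():
    print(f"{name}: intensity score {speculative_intensity(n):.3e}")
```

The only real content here is structural: with any exponent above 1, doubling the complexity proxy more than doubles the score, so the whale-to-lobster intensity ratio comes out far larger than their neuron-count ratio. Whether anything like that is true of actual experience is exactly what I said I cannot detect or measure.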
This doesn’t stop me caring about people, animals, or things. I care about them less than myself, and I don’t seem to care in direct proportion to any simple metric. There’s a lot of something that feels inverse-square with saliency to me at the moment. I’m highly suspicious of people who claim to care in simple quanta based on simple measures, especially when they don’t act that way.
I share your uncertainty about whether a lobster, let alone a carrot, feels anything like I do, and I distrust one-number ethics.
What puzzles me is the double standard. We cheerfully use words like "blue", "harm", or "value" even though we can't know our private images line up, yet when the word is "qualia", we demand lab-grade inter-subjective proof before letting it into the taxonomy.
Why the extra burden? Physics kept "heat" on the books long before kinetic theory; the placeholder helped and never hurt. Likewise, "qualia" is a rough pointer that stops us from editing felt experience out of the ontology just because we can't yet measure it.
A future optimiser that tracks disk-thrashing but not suffering will tune for the former and erase the latter. Better an imperfect pointer to the phenomenon of felt valence than a brittle catalogue of “beings that can hurt.” Qualia names the capacity-for-hurt-or-joy; identity-independent, like heat, and present wherever the right physical pattern appears.
If you had to draft a first-pass rule today, which observable features would you check to decide whether an AI system, a lobster, or a human fetus belongs in the “moral-patient” set? And what language would you use for those features?
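To make my own question concrete, here is one deliberately crude sketch of what such a first-pass rule could look like. Every feature name, the scoring scheme, and the threshold are invented for illustration; which features belong on the list is precisely the open question.

```python
from dataclasses import dataclass


@dataclass
class ObservableFeatures:
    """Candidate observables, chosen for illustration only."""
    has_nociception: bool         # damage-detecting circuits or analogues
    integrates_information: bool  # global, not merely reflexive, processing
    shows_aversion_learning: bool # changes behaviour to avoid past harms
    self_models: bool             # maintains some representation of itself


def first_pass_moral_patient(f: ObservableFeatures, threshold: int = 2) -> bool:
    """Crudely count features and compare against a made-up threshold."""
    score = sum([f.has_nociception, f.integrates_information,
                 f.shows_aversion_learning, f.self_models])
    return score >= threshold


# A hypothetical scoring of a lobster under these invented features.
lobster = ObservableFeatures(True, False, True, False)
print(first_pass_moral_patient(lobster))  # this crude rule says True at threshold 2
```

Even this sketch exposes the problem I am pointing at: the feature list, the equal weights, and the threshold all smuggle in judgments about what interiority looks like from outside. That hidden vocabulary is the language I am asking about.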