It may not be a necessary condition, but if you want to present it as obvious, it is necessary.
No, it is not. No matter whether I want to present it as obvious or not, that condition is not necessary.
Anything short of an exact match is only allegedly the same
Your language is too imprecise. I’m not saying that an inexact behavioral match implements the same conscious states. It implements similar ones—depending on how close the behavioral match is.
until you have some research results that don’t currently exist
This is a matter of philosophy. No research results can help here, nor are they needed.
To see that we don’t need an exact behavioral match for the being to remain conscious, imagine a thought experiment in which someone replicates you precisely except for one input, to which, instead of “I’d rather have vanilla ice cream,” you would respond “I’d rather have chocolate ice cream.” (Or, perhaps, a sci-fi variant: the person responds exactly the way the original would for all inputs except “Computer, end program,” at which the simulated person disappears.)
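To make the structure of that thought experiment concrete, here is a minimal sketch in Python. The function names and the prompt string are hypothetical, invented only for illustration: the replica’s behavior is identical to the original’s on every input except one.

```python
# Minimal illustrative sketch: `respond` is a hypothetical stand-in for the
# original person's full input-output behavior.
ICE_CREAM_PROMPT = "Which ice cream would you like?"

def respond(prompt: str) -> str:
    if prompt == ICE_CREAM_PROMPT:
        return "I'd rather have vanilla ice cream."
    return "(the original's response to any other prompt)"

def replica_respond(prompt: str) -> str:
    # The replica matches the original on every input except this single one.
    if prompt == ICE_CREAM_PROMPT:
        return "I'd rather have chocolate ice cream."
    return respond(prompt)

assert respond("How are you?") == replica_respond("How are you?")
assert respond(ICE_CREAM_PROMPT) != replica_respond(ICE_CREAM_PROMPT)
```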
It is not a necessary condition for the claim that this is true. It is very much necessary for the claim that it is obvious.
Different truths are obvious to different people.
If no empirical results will make it clear
That’s not possible in principle. No matter what you empirically observe in a system, there is a possibility it’s not a quale (because perhaps you were mistaken about what constitutes a quale).
and your thought experiments certainly wouldn’t!
You suggested that an exact behavioral match might be necessary for consciousness. I gave an example where it is not necessary. That disproves your conjecture, leaving us with certain knowledge that an exact match is not necessary.
And if it’s not obvious to everyone, it isn’t obvious. That’s what it means to claim something is obvious.
That’s not possible in principle. No matter what you empirically observe in a system, there is a possibility it’s not a quale (because perhaps you were mistaken about what constitutes a quale).
Then it’s not possible in principle for it to become obvious and you should stop trying to convince people it is.
I gave an example where it is not necessary.
No, you gave examples where it was still necessary. In none of your thought experiments is it clear that the variant emulation is conscious. In the ‘chocolate ice cream’ example I’d say it is very likely it is not conscious, because you can’t just make a small change like that and not have it propagate to larger ones, and making an arbitrary spot change without that will disrupt what’s going on in a way that probably, at least temporarily, disrupts consciousness. Compare to a concussion with loss of memory, or blackout drunkenness, during which most people will agree no consciousness is taking place.
Also, in the spirit of a higher mutual utility gain, I propose we go back to weak-downvoting each other.
And if it’s not obvious to everyone, it isn’t obvious.
Then nothing can be obvious.
Then it’s not possible in principle for it to become obvious
Sorry, but that’s false. Establishing what qualia (or anything else) are is done by reasoning: we work out what requirements we have for qualia, and once those requirements are established, we can empirically discover what fits them. Having established that, we can perform an experiment to see whether the system contains them.
In points:
1. Considering what a priori requirements we have for qualia.
2. Finding the definition that matches them, or empirically discovering what matches those a priori requirements.
3. Empirically searching the system for the referent of the definition.
Empirical search can never tell us, even in principle, whether we have overlooked any requirements. That doesn’t mean it can’t become obvious. For example, we could have a mathematical proof that our requirements imply only one mathematically possible candidate.
I see in your profile you were, like me, on LessWrong from the beginning. If you recall the Sequences, qualia are the aspects of the pattern that implement our talking about consciousness.
Can you be more specific about what role you think empirical research would play?
In none of your thought experiments is it clear that the variant emulation is conscious.
The latter one could be a mind upload that implements the same computation, with the computer turning it off when it detects the specific sentence. Would you agree that mind uploads can be conscious (if they implement the same computation), or do you think biological theories of consciousness are possible?
you can’t just make a small change like that and not have it propagate to larger ones
You can have it be causally isolated from the rest. Or you can imagine another human who behaves almost exactly like you, even though their difference in behavior will be larger than a single output (which makes the idea that we don’t need exact behavior more plausible).
As I recall the Sequences, they are very negative on qualia as a concept. Belief in qualia is a belief which does not pay rent. I am generally unconvinced humans have qualia. I don’t see that I do. There appears to be no means of demonstrating them by experiment, so presumptively they have no predictive power and might as well not exist; if they exist, they’re irrelevant, and therefore unparsimonious.
It does seem plausible that an abruptly-stopping mind upload is conscious. It does not seem obvious; there’s a boundary condition and consciousness is very plausibly sensitive to boundary conditions and abrupt jumps according to many varieties of the theory. Most of your claims are of this nature; if you stop making the arrogant and unjustified claims that they’re obvious, there would be no reason to make further objections, because they’re perfectly plausible.
Then nothing can be obvious.
Indeed, that’s probably true in most contexts. “Obvious” rarely if ever has explanatory or didactic power and most people’s vocabularies would be better served by dropping it. I make use of ‘seems obvious’ much more than ‘is obvious’ because it is much more useful as a statement about my mind (conveys information about my reaction) than about the world (makes a universal claim which is enormously difficult to justify).
As I recall the Sequences, they are very negative on qualia as a concept.
No, they’re not. All of them treat qualia as something existing.
I am generally unconvinced humans have qualia. I don’t see that I do.
Then you can’t believe in consciousness. Consciousness is made of qualia. Qualia are what a first-person experience is conceptually made of.
Most of your claims are of this nature; if you stop making the arrogant and unjustified claims that they’re obvious
I never claimed my claims were obvious. Please, reread the conversation.
On the other hand, your definition of obvious (where everyone has to agree something is obvious) is a definition nobody on Earth uses, and I don’t see why you are using it, except as a substitute for a technical argument.
The progress of this conversation strikes me as extremely bizarre: you attacked me for claiming my statements were obviously true (which I didn’t claim, despite it being true), then you redefined “obvious” to mean “obvious to everyone” even though that is not how anyone uses the word, and now you are claiming you don’t even believe in qualia (when the entire conversation is about whether LLM characters have them) but do believe in consciousness, which is a contradiction.
Are you really here for a technical conversation, or are you here to have an argument, no matter what statements you have to throw at the wall to keep it going?
If you want to have a technical conversation about the topic, I’m open to it. But if you want to talk about how nothing can ever be obvious, and about how you don’t believe in qualia (but believe in consciousness), without addressing my points, then someone else might be a better conversational partner for you.
Oh, my mistake, technically you just made sweeping claims without attempting to justify them in the slightest. That is not literally equivalent to claiming they’re obvious. However, that is the same thing in practice. If you want to say “Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states.” and then never explain why you believe this to be true or defend it in any way, even when challenged—which you did—then you are, in every way that matters, claiming that it is obvious to every possible interlocutor. That no interlocutor’s doubts make it worth your time to explain yourself or defend your position. Let alone make an attempt to convince someone who has different priors, or different experiences.
(This is, of course, what people claiming something is obvious mean. That no one, or no one who counts, could possibly deny them. This is why good teachers of philosophy, mathematics, and science strongly discourage their students from getting in the habit of saying things are obvious; because that is almost never true.)
Also, I reread the parts of the Sequences about the zombie argument and I stand by what I said—they’re basically with me, that qualia are irrelevant. No useful definition of consciousness relies on qualia. If your definition of consciousness relies on qualia it is not useful, because it necessarily makes no empirical predictions. It is not quite as ridiculous as full epiphenomenal zombieism, but it is bad for the same reason.
Basic elements of conscious experience is what people mean by qualia.
An example is the feeling of pain or the perception of redness. If you know what it feels like to be in pain, or what red looks like, that is what is meant by qualia.
Given what people mean by qualia and consciousness, you can either believe in both, or disbelieve in both, but if you believe in subjective experience but disbelieve in qualia, you’re using words differently from everybody else.
(The Sequences explain that a microphysical duplicate of our body would also contain causes of us talking about qualia, which means it would also contain our qualia, which makes zombies metaphysically impossible.)
and then never explain why you believe this to be true or defend it in any way, even when challenged
I don’t think I was challenged to explain why I believe that (even though I was challenged about other things).
Some reasons I believe it are in this comment.
One other reason would be that we can imagine replacing an entire part of the brain with an I/O-equivalent but computationally non-isomorphic system. If we needed correct internal computations for qualia (and not just correct behavior), the overall system would then falsely believe it has a quale (like being in pain): it would act, in all ways, as if it were in pain, but it actually wouldn’t be in pain.
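For a toy picture of what “I/O equivalent, but computationally non-isomorphic” means (a sketch under my own simplifying assumptions, not a model of a brain), the two functions below produce identical outputs over their whole domain, yet one computes the answer while the other only looks it up.

```python
# Two implementations with identical input-output behavior on 0..9,
# but different internal computation.
def parity_computed(n: int) -> str:
    # Derives the answer by arithmetic.
    return "even" if n % 2 == 0 else "odd"

# A precomputed lookup table covering the finite domain 0..9.
PARITY_TABLE = {n: ("even" if n % 2 == 0 else "odd") for n in range(10)}

def parity_looked_up(n: int) -> str:
    # Same outputs as parity_computed on 0..9, but no arithmetic happens here.
    return PARITY_TABLE[n]

# I/O equivalence over the shared domain.
assert all(parity_computed(n) == parity_looked_up(n) for n in range(10))
```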
If we needed correct internal computations for qualia (and not just correct behavior), the overall system would then falsely believe it has a quale (like being in pain): it would act, in all ways, as if it were in pain, but it actually wouldn’t be in pain.
To all appearances LLMs already do that and have for several years now. So, yes, that is clearly possible for a non-conscious thing to do.
Your definition of qualia is nonstandard, and defines it out of meaningfulness. More standard definitions generally include at least one synonym for ‘ineffable’ and I believe them to be entirely mysterious answers to mysterious questions.
To all appearances LLMs already do that and have for several years now.
LLMs can be (incorrectly) argued to have no qualia, and therefore no beliefs in the sense that my hypothetical uses. (In my hypothetical, the rest of the agent remains intact, and qualia-believes himself to have the quale of pain, even though he doesn’t.)
(I’m also noting you said nothing about my three other reasons, which is completely understandable, yet something I think you should think about.)
Do you mean meaninglessness?