As I recall the Sequences, they are very negative on qualia as a concept. Belief in qualia is a belief which does not pay rent. I am generally unconvinced humans have qualia; I don’t see that I do. There appears to be no means of demonstrating them by experiment, so presumptively they have no predictive power and might as well not exist; if they do exist, they’re irrelevant, and therefore unparsimonious.
It does seem plausible that an abruptly-stopping mind upload is conscious. It does not seem obvious; there’s a boundary condition, and according to many varieties of the theory consciousness is very plausibly sensitive to boundary conditions and abrupt jumps. Most of your claims are of this nature; if you stopped making the arrogant and unjustified claim that they’re obvious, there would be no reason to make further objections, because they’re perfectly plausible.
Then nothing can be obvious.
Indeed, that’s probably true in most contexts. “Obvious” rarely if ever has explanatory or didactic power, and most people’s vocabularies would be better served by dropping it. I use ‘seems obvious’ much more than ‘is obvious’ because it is much more useful as a statement about my mind (it conveys information about my reaction) than as a claim about the world (a universal claim which is enormously difficult to justify).
As I recall the Sequences, they are very negative on qualia as a concept.
No, they’re not. All of them treat qualia as something that exists.
I am generally unconvinced humans have qualia. I don’t see that I do.
Then you can’t believe in consciousness. Consciousness is made of qualia. Qualia are what a first-person experience is conceptually made of.
Most of your claims are of this nature; if you stop making the arrogant and unjustified claims that they’re obvious
I never claimed my claims were obvious. Please, reread the conversation.
On the other hand, your definition of obvious (where everyone has to agree something is obvious) is a definition nobody on Earth uses, and I don’t see why you are using it, except as a substitute for a technical argument.
The progress of this conversation strikes me as extremely bizarre: you attacked me for claiming my statements were obviously true (which I didn’t claim, despite it being true), then you redefined “obvious” to mean “obvious to everyone,” which is not how anyone uses the word, and now you are claiming you don’t even believe in qualia (when the entire conversation is about whether LLM characters have them) but do believe in consciousness, which is a contradiction.
Are you really here for a technical conversation, or are you here to have an argument, no matter what statements you have to throw at the wall to keep it going?
If you want to have a technical conversation about the topic, I’m open to it. But if you want to talk about how nothing can ever be obvious, and about how you don’t believe in qualia (but believe in consciousness), without addressing my points, then someone else might be a better conversational partner for you.
Oh, my mistake: technically you just made sweeping claims without attempting to justify them in the slightest. That is not literally equivalent to claiming they’re obvious, but in practice it is the same thing. If you want to say "Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states" and then never explain why you believe this to be true or defend it in any way, even when challenged—which you did—then you are, in every way that matters, claiming that it is obvious to every possible interlocutor. That no interlocutor’s doubts make it worth your time to explain yourself or defend your position, let alone make an attempt to convince someone who has different priors or different experiences.
(This is, of course, what people mean when they claim something is obvious: that no one, or no one who counts, could possibly deny it. This is why good teachers of philosophy, mathematics, and science strongly discourage their students from getting into the habit of saying things are obvious; it is almost never true.)
Also, I reread the parts of the Sequences about the zombie argument and I stand by what I said: they basically agree with me that qualia are irrelevant. No useful definition of consciousness relies on qualia. If your definition of consciousness relies on qualia, it is not useful, because it necessarily makes no empirical predictions. It is not quite as ridiculous as full epiphenomenal zombieism, but it is bad for the same reason.
The basic elements of conscious experience are what people mean by qualia.
Examples are the feeling of pain and the perception of redness. If you know what it feels like to be in pain, or what red looks like, that is what is meant by qualia.
Given what people mean by qualia and consciousness, you can either believe in both, or disbelieve in both, but if you believe in subjective experience but disbelieve in qualia, you’re using words differently from everybody else.
(The Sequences explain that a microphysical duplicate of our body would also contain the causes of our talking about qualia, which means it would also contain our qualia, and that makes zombies metaphysically impossible.)
and then never explain why you believe this to be true or defend it in any way, even when challenged
I don’t think I was challenged to explain why I believe that (even though I was challenged about other things).
Some reasons I believe it are in this comment.
One other reason is that we can imagine replacing an entire part of the brain with an I/O-equivalent but computationally non-isomorphic system. If we needed correct internal computations for qualia (and not just correct behavior), the overall system would falsely believe itself to have a quale (like being in pain): it would act, in all ways, as if it were in pain, but actually it wouldn’t be in pain.
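To make the “I/O-equivalent but computationally non-isomorphic” idea concrete, here is a minimal toy sketch in Python (all function names and numbers are invented for illustration, and it is not itself an argument about qualia): one function produces its output by running an iterative internal process, while the other reproduces exactly the same outputs over the same input domain from a precomputed table.

```python
# Toy illustration only: two systems with the same input-output pattern
# but non-isomorphic internal computation. All names and numbers here
# are invented for the example; nothing below is a claim about qualia.

def pain_report_simulated(stimulus: float) -> str:
    """Derives a 'pain' report by running an iterative internal process."""
    level = 0.0
    for _ in range(10):          # internal dynamics the lookup version lacks
        level = 0.5 * level + stimulus
    return "ouch" if level > 1.0 else "fine"

# Precompute every answer over a shared, discretized input domain.
_LOOKUP = {i: pain_report_simulated(i / 10) for i in range(101)}

def pain_report_lookup(stimulus: float) -> str:
    """Returns the same report for the same input, by table lookup alone."""
    return _LOOKUP[round(stimulus * 10)]

# I/O-equivalent over the shared domain, despite different internals.
assert all(
    pain_report_simulated(i / 10) == pain_report_lookup(i / 10)
    for i in range(101)
)
```

Over that domain the two are indistinguishable from the outside, which is the property the thought experiment turns on.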
If we needed correct internal computations for qualia (and not just correct behavior), the overall system would falsely believe itself to have a quale (like being in pain): it would act, in all ways, as if it were in pain, but actually it wouldn’t be in pain.
To all appearances, LLMs already do that, and have for several years now. So, yes, that is clearly something a non-conscious thing can do.
Your definition of qualia is nonstandard, and defines it out of meaningfulness. More standard definitions generally include at least one synonym for ‘ineffable’ and I believe them to be entirely mysterious answers to mysterious questions.
To all appearances LLMs already do that and have for several years now.
LLMs can be (incorrectly) argued to have no qualia, and therefore no beliefs in the sense that my hypothetical uses. (In my hypothetical, the rest of the agent remains intact, and qualia-believes himself to have the quale of pain, even though he doesn’t.)
(I’m also noting that you said nothing about my three other reasons, which is completely understandable, yet something I think you should think about.)
Do you mean meaninglessness?