Oh, my mistake: technically you just made sweeping claims without attempting to justify them in the slightest, which is not literally equivalent to claiming they’re obvious. In practice, though, it is the same thing. If you want to say “Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states” and then never explain why you believe this to be true or defend it in any way, even when challenged (which you did), then you are, in every way that matters, claiming that it is obvious to every possible interlocutor: that no interlocutor’s doubts make it worth your time to explain yourself or defend your position, let alone attempt to convince someone who has different priors or different experiences.
(This is, of course, what people who claim something is obvious mean: that no one, or no one who counts, could possibly deny them. This is why good teachers of philosophy, mathematics, and science strongly discourage their students from getting in the habit of saying things are obvious; it is almost never true.)
Also, I reread the parts of the Sequences about the zombie argument, and I stand by what I said: they’re basically with me in holding that qualia are irrelevant. No useful definition of consciousness relies on qualia. If your definition of consciousness relies on qualia, it is not useful, because it necessarily makes no empirical predictions. It is not quite as ridiculous as full epiphenomenal zombieism, but it is bad for the same reason.
The basic elements of conscious experience are what people mean by qualia.
An example is the feeling of pain or the perception of redness. If you know what it feels like to be in pain, or what red looks like, that is what is meant by qualia.
Given what people mean by qualia and consciousness, you can either believe in both or disbelieve in both; but if you believe in subjective experience while disbelieving in qualia, you’re using words differently from everybody else.
(The Sequences explain that a microphysical duplicate of our body would also contain the causes of our talking about qualia, which means it would also contain our qualia, which makes zombies metaphysically impossible.)
I don’t think I was challenged to explain why I believe that (even though I was challenged about other things).
Some reasons I believe it are in this comment.
One other reason is that we can imagine replacing an entire part of the brain with a system that is I/O-equivalent but not computationally isomorphic. If qualia required the correct internal computations (and not just the correct behavior), the overall system would then falsely believe itself to have a quale (like being in pain): it would act, in all ways, as though it were in pain, but actually it wouldn’t be in pain.
To all appearances, LLMs already do that, and have for several years now. So, yes, that is clearly something a non-conscious thing can do.
Your definition of qualia is nonstandard, and defines it out of meaningfulness. More standard definitions generally include at least one synonym for ‘ineffable’, and I believe them to be entirely mysterious answers to mysterious questions.
LLMs can be (incorrectly) argued to have no qualia, and therefore no beliefs in the sense that my hypothetical uses. (In my hypothetical, the rest of the agent remains intact, and the agent qualia-believes himself to have the quale of pain, even though he doesn’t.)
(I also note that you said nothing about my three other reasons, which is completely understandable, yet something I think you should think about.)
Do you mean meaninglessness?