Yes, if you are using “conscious” with sufficient deference to ordinary usage. There are at least two aspects to consciousness in that usage: access consciousness, and phenomenal consciousness. Access consciousness applies to information which is globally available to the organism for control of behavior, verbal report, inference, etc. It’s phenomenal consciousness which your computer lacks.
Scott Aaronson’s “Pretty hard problem of consciousness”, which shminux mentions, is relevant here, but an additional point about phenomenal consciousness cuts some ice when it comes to your computer. Phenomenal consciousness allows us to distinguish between “appearance” and “reality”. For example, you can say that a painting appears to be moving, but that’s because you took LSD, and you know that it is really stationary. For a number of modes of information-gathering, nature has equipped us with internal access to our own states (subjective colors, sounds, etc.) as well as the external world-properties themselves. That’s something today’s computers (outside of an AI lab maybe) don’t do. They represent the world, but they don’t independently represent their own visual/auditory/etc. states.
That said, you could add an appearance-reality distinction to a computer’s repertoire, and it wouldn’t be obvious that full consciousness was achieved. Ultimately I suspect Scott Aaronson’s “Pretty hard problem of consciousness” is the key.
Thanks for this reply; this is the kind of quarter that seemed most promising for making ‘consciousness’ a useful concept.
Yes, if you are using “conscious” with sufficient deference to ordinary usage. There are at least two aspects to consciousness in that usage: access consciousness, and phenomenal consciousness. Access consciousness applies to information which is globally available to the organism for control of behavior, verbal report, inference, etc. It’s phenomenal consciousness which your computer lacks.
I am confused about qualia. Qualia has strong features of a confused concept, such that if ‘consciousness’ is getting at a qualia-nonqualia distinction, then it would seem to be a recursive or fractal confusion. If qualia is to be a non-epiphenomenal concept, then there must be non-mysterious differences one could in principle point to to distinguish qualia-havers from non-qualia-havers. History of science strongly suggests a functionalism under which a version of me implemented on a different substrate but structurally identical should experience qualia which are the same, or at least the same according to whatever criteria we might care about.
It feels to me like qualia is used in an epiphenomenal way. But if it is to be non-confused, it cannot be; it must refer to sets of statements like, ‘This thing reacts in this way when it is poked with a needle, this way when UV light hits its eyes, …’ or something (possibly less boring propositions, but still fundamentally non-mysterious ones).
Insomuch as ‘consciousness’ depends on the notion of ‘qualia’, I am very wary of its usage, because then a less-likely-to-be-confused concept (consciousness) is being used in terms of a very dubious, more-likely-to-be-confused concept (qualia). If we’re using ‘consciousness’ as a byword for qualia, then we should just say ‘qualia’ and be open about the fact that we’re implicitly touching upon the (hard) problem of consciousness, which is at best very confusing and difficult and at worst a philosophical quagmire, so that we do not become overconfident in what we are saying or that what we are saying is even meaningful.
Eliezer has his thing where he refers to Magical Reality Fluid to lampshade his confusion. Using ‘consciousness’ to smuggle in qualia feels like the opposite of that approach.
For all this skepticism, I do worry that those who dismiss qualia outright are being foolishly hasty.
I don’t think ‘consciousness’ can be justified on grounds of this type of representational phenomenology. Good phenomenology (e.g. converging on a theory like that red is to do with wavelengths of light within a mature theory of electromagnetism) is something roughly like getting useful mappings from terms (phenomena/lossy observations) to interpretations (specific accounts of those phenomena, e.g. a computer-checkable mathematization of the stimulus-phenomenon pair). That might be somewhat mysterious, but it doesn’t feel mysterious in the same way that I’m confused about qualia, or that most people seem to be confused about consciousness. As you say, it’s not clear that something good at figuring out the world from the phenomena given to it need even be conscious.
Ultimately I suspect Scott Aaronson’s “Pretty hard problem of consciousness” is the key.
I’m not sure if this is possibly covered by ‘is the key’, but it seems to me that discussion of Scott Aaronson’s PHPC is potentially tricksome in the same way as Chalmers’ HPC, namely that discussions of it are often framed in terms of ‘What is the Platonic Essence of Consciousness’, rather than, ‘Why should we think consciousness has a Platonic Essence and is fundamental? And if it is, what is that Essence?’
I submit that it is (many of) the theories and arguments that are confused, not the concept. The concept has some semantic vagueness, but that’s not necessarily fatal (compare “heap”).
History of science strongly suggests a functionalism under which a version of me implemented on a different substrate but structurally identical should experience qualia which are the same
If “structurally identical” applies at the level of algorithms—see thesis #5 and “consistent position” #2 in this post by dfranke—then I agree.
It feels to me like qualia is used in an epiphenomenal way.
That happens when people embrace some of the confused theories. Then comes the attack of the p-zombies.
I’m all in favor of talking openly about qualia, because that is the hard problem fueling the bad metaphysics, not access consciousness. Self-consciousness can also be tricky, but in good part because it aggravates qualia problems. But I don’t think the hard problem is an inescapable quagmire. Instead, the intersection of self-reference (with all its “paradoxes”) and the appearance/reality distinction creates some unique conditions, in which many of our generally-applicable epistemic models and causal reasoning patterns fail. If you’ve got time for a book, I recommend Jenann Ismael’s The Situated Self, which in spots could have been better written, but is well worth the effort. This paper covers a lot, too.
(e.g. converging on a theory like that red is to do with wavelengths of light within a mature theory of electromagnetism)
That’s the reality side of redness; what people puzzle over is the relations between appearances (e.g. inverted spectrum worries). Maybe I misunderstand you. My claim is that the fact that appearances are mere appearances definitely does contribute to the hardness of the hard problem.
I don’t think qualia and consciousness are fundamental in any of the usual senses—like basic particles? And I have no idea how simple and elegant an Essence has to be before it becomes Platonic. But humans think in prototypes and metaphors, and we get along just fine. We don’t need to have an answer to every conceivable edge-case in order to make productive use of a concept. Nor do we need such precision even to see, in rough outline, how the referents of the concept, in the cases that interest us, would be tractable using our best scientific theories.
I am confused about qualia. Qualia has strong features of a confused concept, such that if ‘consciousness’ is getting at a qualia-nonqualia distinction, then it would seem to be a recursive or fractal confusion.
Why? Do you think that consciousness is defined in terms of qualia, and that qualia are in turn defined in terms of consciousness?
If qualia is to be a non-epiphenomenal concept, then there must be non-mysterious differences one could in principle point to to distinguish qualia-havers from non-qualia-havers.
Yes. “Must be” doesn’t imply “must be knowable”, though.
History of science strongly suggests a functionalism under which a version of me implemented on a different substrate but structurally identical should experience qualia which are the same, or at least the same according to whatever criteria we might care about
The criteria we care about are the killer, though. An exact duplicate all the way down would be an exact duplicate, and therefore not running on a different substrate. What you are therefore talking about is a duplicate of the relevant subset of structure, running on a different substrate. But knowing what the relevant subset is is no easier than the Hard Problem.
It feels to me like qualia is used in an epiphenomenal way.
The simplistic theory that qualia are distinct from physics has that problem. The simplistic theory that qualia are identical to physics has the problem that no one can show how that works. The simplistic theory that qualia don’t exist at all has the problem that I have them all the time.
However, none of that has much to do with the definition of qualia.
If we had a good theory of qualia, we would know what causes them and what they cause. But we need the word ‘qualia’ to point out what we don’t have a good theory of. When you complain that qualia seem epiphenomenal, what you are actually complaining about is the lack of a solution to the HP.
But if it is to be non-confused, it cannot be; it must refer to sets of statements like, ‘This thing reacts in this way when it is poked with a needle, this way when UV light hits its eyes, …’ or something (possibly less boring propositions, but still fundamentally non-mysterious ones).
Why? Why can’t it mean “the ways things seem to a subject” or “an aspect of consciousness we don’t understand”, or both?
We don’t know the reference of “qualia”, right enough, but that does not mean the sense is a problem.
Insomuch as ‘consciousness’ depends on the notion of ‘qualia’, I am very wary of its usage, because then a less-likely-to-be-confused concept (consciousness) is being used in terms of a very dubious, more-likely-to-be-confused concept (qualia).
Why is it more confused? On the face of it, ‘qualia’ labels a particular aspect of consciousness. Surely that would make it more precise.
For a number of modes of information-gathering, nature has equipped us with internal access to our own states (subjective colors, sounds, etc.) as well as the external world-properties themselves. That’s something today’s computers (outside of an AI lab maybe) don’t do.
Surely any computer that controls an automated process must do this?
Consider, for example, a robotic arm used to manufacture a car. The software knows that if the arm moves like so, then it will be holding the door in the right place to be attached; and it knows this before it actually moves the arm. So it must have an internal knowledge of its own state, and of possible future states.

Isn’t that exactly what you describe here?
I was focusing on perceptual channels, so your motor-channel example would be analogous, but not the same. If the robot uses proprioception to locate the arm, and if it makes an appearance/reality distinction on the proprioceptive information, then you have a true example.
Hmmm.

Assuming for the moment that the robot has a sensor of some type on each joint, that can tell it at which angle that joint is being held; that would be a robotic form of proprioception.
And if it considers hypothetical future states of the arm, as it must do in order to safely move the arm, then it must consider what proprioceptive information it expects to get from the arm, and compare this to the reality (the actual sensor value changes) during the movement of the arm.
I think that’s an example of what you’re talking about...
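The expectation-versus-reality loop described above can be sketched in a few lines of code. This is only an illustrative sketch under the assumptions stated in the thread (a single joint with an angle sensor); all names here are hypothetical, not taken from any real robotics library.

```python
# Hypothetical controller sketch: predict the proprioceptive reading
# expected at each step of a move, then compare predictions to the
# actual sensor values as the arm travels.

def plan_move(start_angle, target_angle, steps):
    """Expected joint angles (degrees) at each step of a linear move."""
    return [start_angle + (target_angle - start_angle) * i / steps
            for i in range(1, steps + 1)]

def monitor_move(expected, actual, tolerance=2.0):
    """Indices of steps where the sensed angle diverges from prediction."""
    return [i for i, (e, a) in enumerate(zip(expected, actual))
            if abs(e - a) > tolerance]

expected = plan_move(0.0, 90.0, steps=3)   # [30.0, 60.0, 90.0]
actual = [30.5, 61.0, 80.0]                # arm obstructed near the end
print(monitor_move(expected, actual))      # [2]
```

A nonempty result is exactly the point where the robot’s “expectation” and the sensed “reality” come apart, which is the comparison the comment above describes.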
One more thing: if the sensor values are taken as absolute truth and the motor-commands are adjusted to meet those criteria, that still wouldn’t suffice. But if you include a camera as well as the proprioceptors, and appropriate programming to reconcile the two information sources into a picture of an underlying reality, and make explicit comparisons back to each sensory domain, then you’ve got it.
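A minimal sketch of that reconciliation step, assuming the simplest possible fusion rule (a weighted average of the two channels); the function names are hypothetical, for illustration only:

```python
# Reconcile two sensory channels into one estimate of the underlying
# reality, then compare that estimate back to each channel. The residuals
# mark where a channel's "appearance" departs from the reconciled "reality".

def fuse(proprio_angle, camera_angle, proprio_weight=0.5):
    """Weighted average of the two channels as the best guess at reality."""
    return proprio_weight * proprio_angle + (1 - proprio_weight) * camera_angle

def residuals(proprio_angle, camera_angle):
    """Per-channel discrepancy between appearance and reconciled reality."""
    reality = fuse(proprio_angle, camera_angle)
    return {"proprioception": proprio_angle - reality,
            "camera": camera_angle - reality}

# A miscalibrated joint sensor reads 45 degrees; the camera says 41.
print(fuse(45.0, 41.0))       # 43.0
print(residuals(45.0, 41.0))  # {'proprioception': 2.0, 'camera': -2.0}
```

The residual dictionary is the “explicit comparison back to each sensory domain”: neither sensor is treated as absolute truth, and each channel’s reading can be marked as mere appearance relative to the fused picture.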
Note that if two agents (robotic or human) agree on what external reality is like, but have no access to each other’s percepts, the whole realm of subjective experience will seem quite mysterious. Each can doubt that the other’s visual experience is like its own, for example (although obviously certain structural isomorphisms must obtain). Etc. Whereas, if an agent has no access to its own subjective states independent of its picture of reality, it will see no such problem. Agreement on external reality satisfies its curiosity entirely. This is why I brought the issue up. I apologize for not explaining that earlier; it’s probably hard to see what I’m getting at without knowing why I think it’s relevant.
Ah, thank you. That makes it a lot clearer.

I’ve seen a system that I’m pretty sure fulfills your criteria—it uses a set of multiple cameras at carefully defined positions and reconciles the pictures from these cameras to try to figure out the exact location of an object with a very specific colour and appearance. That would be the “phenomenal consciousness” that you describe; but I would not call that system any more or less conscious than any other computer.
Note that if two agents (robotic or human) agree on what external reality is like, but have no access to each other’s percepts, the whole realm of subjective experience will seem quite mysterious. Each can doubt that the other’s visual experience is like its own, for example (although obviously certain structural isomorphisms must obtain). Etc.
Ah—surely that requires something more than just an appearance-reality distinction. That requires an appearance-reality distinction and the ability to select its own thoughts. While the specific system I refer to in the second paragraph has an appearance-reality distinction, I have yet to see any sign that it is capable of choosing what to think about.

That (thought selection) seems like a good angle. I just wanted to throw out a necessary condition for phenomenal consciousness, not a sufficient one.