I’m thinking about a top-level post on heterophenomenology. I’d like to hear from people who don’t believe in p-zombies, but don’t think that heterophenomenology is enough to set out the problem of consciousness, on why you don’t see a contradiction between those two positions. Thanks!
Ok, I don’t believe in p-zombies in the standard sense of there being a logically possible world which is physically identical to this one, but where the inhabitants are not conscious. But I do believe that someone (perhaps a super-intelligent being) could emulate my outward behavior perfectly while having very different conscious experiences on the inside (e.g., by lying). I don’t think you can distinguish between these two cases without reference to what I’m really experiencing, as opposed to just what I say about what I experience.
This is a traditional objection to the “behaviorism” of philosophers such as Carnap. I recall arguing in an undergraduate term paper that this was a misunderstanding of behaviorism: there is no reason that “behavior” should not encompass e.g. the behavior of neurons, which are in principle just as publicly observable as a subject’s verbal behavior. So the question is whether any being could have a brain observably identical to yours and yet have different experiences.
Ok, let me try this again. I want a way to map between the internal and external views of a mind. That is, given what I know about what I’m experiencing, what can I deduce about the physical structure of my brain? And given a physical description of a mind, what can I know about what it is experiencing? Perhaps this is already considered a legitimate part of the problem of consciousness according to heterophenomenology, or “behaviorism” (are they the same thing?), but if so I think it’s at least not a part of the problem that those approaches tend to emphasize. In any case, I’d appreciate it if ciphergoth could address this topic a bit in his post.
(Why am I so interested in this part of the problem? Mainly because I need the solution in order for UDT1 to be usable by human beings.)
But a theory that people are deliberately lying about their internal experience isn’t really going to fly. Even kids ask “when you see red, do you see the same colour I do?” No one prompted them to lie.