I think this is related to what Chalmers calls the “meta problem of consciousness”- the problem of why it seems subjectively undeniable that a hard problem of consciousness exists, even though it only seems possible to objectively describe “easy problems” like the question of whether a system has an internal representation of itself. Illusionism- the idea that the hard problem is illusory- is an answer to that problem, but I don’t think it fully explains things.
Consider the question “why am I me, rather than someone else”. Objectively, the question is meaningless- it’s a tautology like “why is Paris Paris”. Subjectively, however, it makes sense, because your identity in objective reality and your consciousness are different things- you can imagine “yourself” seeing the world through different eyes, with different memories and so on, even though that “yourself” doesn’t map to anything in objective reality. The statement “I am me” also seems to add predictive power to a subjective model of reality- you can reason inductively that since “you” were you in the past, you will continue to be in the future. But if someone else tells you “I am me”, that doesn’t improve your model’s predictive power at all.
I think there’s a real epistemological paradox there, possibly related somehow to the whole liar’s/Gödel’s/Russell’s paradox family. I don’t think it’s as simple as consciousness being equivalent to a system with a representation of itself.
Ah, good point. There’s also this idea of six levels of consciousness that I saw somewhere on the internets, where they say the first level is so-called “survival consciousness”, with an easy-to-follow definition that is probably equivalent to what you describe as an “easy problem”. It is then followed by fancier, more elusive levels, with trickier questions. I find it quite confusing, though, that we only have one term for all of these. It’s as if these similar yet different concepts were deliberately blended together for the sake of speculation.
It’s especially annoying in AI-related debates when one person claims that AI is perfectly capable of being conscious (implying the basic “survival consciousness”), while another claims that it can’t be (implying something nearly impossible to even define). In the practical context of the AI-vs-humankind relationship (which I guess is quite a hot topic nowadays), i.e. whether it will fight for survival, whether it will see us as a threat, etc., it’s perfectly enough to consider only the basic survival consciousness.
I was just watching a video on Doom Debates with critics of Penrose’s stance on AI consciousness, which he denies without the slightest hesitation, while readily granting the privilege of consciousness to animals. That’s not very useful or practical terminology, then. If we say A (that AI is incapable of those higher levels of consciousness), then we need to say B too (that animals are incapable of those levels as well), while the basic level of survival consciousness is available to both. And facing something with survival consciousness plus superior intelligence is already puzzling enough to justify focusing on more practical questions than philosophical debates about higher levels of consciousness. Penrose’s position, meanwhile, feels more like “OK people, move along, there’s nothing to see here.”
I see the point that the broader question is not that easy to answer, but it feels wrong to put the simpler, more practical case under the same umbrella as the non-trivial ones and just discard them all together. I think it leads to ridiculous claims and creates a false impression that there’s nothing to worry about, purely because of poor terminology. It’s quite sad to see this confusion time and time again, hence the original post.
Did you mean these Levels of Consciousness? I think these are descriptive (and to some degree prescriptive) but not explanatory. They don’t say how these layers arise except as a developmental process, but that just pushes the explanation elsewhere.