It feels like you’re just changing the name of the confusing thing from ‘the fact that I seem conscious to myself’ to ‘the fact that I’m experiencing an illusion of consciousness.’ Cool, but, like, there’s still a mysterious thing that seems quite important to actually explain.
I don’t actually agree. Although I have not fully explained consciousness, I think that I have shown a lot.
In particular, I have shown us what the solution to the hard problem of consciousness would plausibly look like if we had unlimited funding and time. And to me, that’s important.
And under my view, it’s not going to look anything like, “Hey, we discovered this mechanism in the brain that gives rise to consciousness.” No, it’s going to look more like, “Look at this mechanism in the brain that makes humans talk about things even though the things they are talking about have no real-world referent.”
You might think that this is a useless achievement. I claim the contrary. As Chalmers points out, pretty much all the leading theories of consciousness fail the basic test of looking like an explanation rather than just sounding confused. Don’t believe me? Read Section 3 in this paper.
In short, Chalmers reviews the current state of the art in consciousness explanations. He first goes into Integrated Information Theory (IIT), but then convincingly shows that IIT fails to explain why we would talk about consciousness and believe in consciousness. He does the same for global workspace theories, first order representational theories, higher order theories, consciousness-causes-collapse theories, and panpsychism. Simply put, none of them even approach an adequate baseline of looking like an explanation.
I also believe that if you follow my view carefully, you might stop being confused about a lot of things. Like, do animals feel pain? Well, that depends on your definition of pain: consciousness is not real in any objective sense, so this is a definitional dispute. The same goes for asking whether person A is happier than person B, or whether computers will ever be conscious.
Perhaps this isn’t an achievement, strictly speaking, relative to the standard LessWrong point of view. But that’s only because I think the standard LessWrong point of view is correct. Even so, I still see people around me making basic mistakes about consciousness. For instance, I see people treating consciousness as intrinsic, ineffable, and private, or assuming there’s an objectively right answer to whether animals feel pain and arguing over it as if it weren’t the same kind of question as whether a tree falling in an empty forest makes a sound.