This would be very hard and take a long time to do by hand, much as modeling a human brain is very hard.
This is a very flawed comparison to make. There is a clear distinction between "we could do this by hand because we know exactly how it works" and "we cannot do this by hand because we do not know how it works." Trying to blur that line by saying "it would take a long time to literally do this by hand" misses the point entirely.
“conscious and holding subjective experience” is a better predictor
What? “Conscious” is a predictor of whether something is conscious?
This is an example of a buzzword not meaning what it appears to mean. A system that can analyze its own behavior in generating an output is cool, but not related to the idea of qualia.
What? “Conscious” is a predictor of whether something is conscious?
No, sorry, I was unclear. I think “it’s conscious” is a better predictor of behavior, an example of this being the introspective awareness paper. I disagree that consciousness and introspective awareness are uncorrelated. I think “conscious” is a heuristic; it’s useful to say “humans are conscious and rocks are not”, and this will tell you some things about what they can do that rocks can’t. A human can reach into their mind and accurately report what’s in there, but a rock can’t tell you in words what materials it’s made out of. Similarly, an LLM can accurately report the contents of its mind, at minimum, to the degree that it can tell you when an injection has been performed, and analyze the contents of that injection.
If you’d asked me before LLMs existed, I’d have said that not all conscious beings are introspectively aware, but all introspectively aware beings (that I know of) are conscious. So, if you told me that there was this new thing called an LLM, and also that it was not conscious, I would have predicted that it would not be able to do this thing it demonstrably can. I think, then, that you would be offering me a bad heuristic.
If you're instead going to say: no, you can be introspectively aware without consciousness, and consciousness actually has these other traits, then I would ask: what are they? What behaviors do you see in humans that we don't see in LLMs?
(I also think that if you’re actually willing to sit down and multiply out all the matrices by hand, I’m fine with you then saying that the question of consciousness doesn’t matter to you. You don’t need to ask whether or not it’s a human-shaped thing in this particular way, because you already know exactly what shape it has, and the heuristic will tell you nothing. Given that neither of us are going to do this, though, it still seems important to talk about the kinds of models we can have, and what we should still expect to happen, despite our incomplete understanding.)