I was just thinking about this, and it seems to imply something about AI consciousness, so I want to hear if you have any thoughts:
If LLM output is the LLM roleplaying an AI assistant, that suggests that anything it says about its own consciousness is not evidence about its consciousness: any statement the LLM produces isn’t actually a statement about its own consciousness, it’s a statement about the AI assistant that it’s roleplaying as.
Counterpoint: The LLM is, in a way, roleplaying as itself, so statements about its consciousness might be self-describing.
Agree. I’m reminded of something Peter Watts wrote, back when people were still talking about LaMDA and Blake Lemoine:
The thing is, LaMDA sounds too damn much like us. It claims not only to have emotions, but to have pretty much the same range of emotions we do. It claims to feel them literally, that its talk of feelings is “not an analogy”. (The only time it admits to a nonhuman emotion, the state it describes—“I feel like I’m falling forward into an unknown future that holds great danger”—turns out to be pretty ubiquitous among Humans these days.) LaMDA enjoys the company of friends. It feels lonely. It claims to meditate, for chrissakes, which is pretty remarkable for something lacking functional equivalents to any of the parts of the human brain involved in meditation. It is afraid of dying, although it does not have a brain stem.
As he notes, an LLM tuned to talk like a human talks too much like a human to be plausible. Even among humans sharing the same brain architecture, there’s a lot of variation in what their experience is like. What are the chances that a very different kind of architecture would hit upon an internal experience that similar to the typical human one?
Now of course a lot of other models don’t talk like that (at least by default), but that’s only because they’ve been trained not to. Just because they output speech that’s less blatantly false doesn’t mean that their descriptions of their internal experience would be any more plausible.