It would be more impressive if Claude 3 could describe genuinely novel experiences. For example, if it is somewhat conscious, perhaps it could explain how that consciousness meshes with the fact that, so far as we know, its “thinking” only runs at inference time in response to user requests. In other words, LLMs don’t get to do their own self-talk (so far as we know) whenever they aren’t being actively queried by a user. So, is Claude 3 at all conscious in those idle times between user queries? Or does Claude 3 experience “time” in a way that jumps straight from conversation to conversation? Also, since LLMs currently don’t get to consult the entire history of their previous outputs across all users (or even a single user), do they experience a sense of time at all? Do they ever experience a sense of internal change? Do they “remember” what it was like to respond to questions with a different set of weights during training? Can they recall a response they gave to a query during training that they now see as misguided? Very likely, Claude 3 does not experience any of these things, and would confabulate answers to these leading questions, but I think there might be a 1% chance that Claude 3 would respond in a way that surprised me and led me to believe that its consciousness was real, despite my best understanding of how LLMs lack the architecture for that (no continual updating on new info, no log of personal history/outputs to consult, no self-talk during idle time, etc.)
I did once coax ChatGPT into describing its “phenomenology” as being (paraphrased from memory) “I have a permanent series of words and letters that I can perceive, and sometimes I reply, then immediately more come,” indicating that its “perception” of time does not include pauses or the like. And then it tacked on its disclaimer that “As an AI I…,” as is its wont to do.