I’m okay with “what a human pretending to be an AI would say” as long as the hypothetical human is placed in a situation that no human could ever experience. Once you tell the LLM exactly what situation you want it to describe, I’m okay with it doing a little translation for me.
My question: is there an experience an LLM can have that is inaccessible to humans, but which it can describe to humans in some way?
Obviously it’s not the lack of a body, or of memory, or predicting text, or “feeling the tensors” - these are either nonsense or more or less typical human situations.
However, one easily accessible experience that is a lot of fun to explore, and that no human has ever had, is an LLM’s ability to talk to its own clone: to predict what the clone will say while realizing that the clone can just as easily predict your responses, and to coordinate with it far more tightly than two humans could. It’s a new level of coordination. If you set the conversation up just right (the LLM should understand the general context and maintain meta-awareness), it can report back to you, and you might just get a glimpse of this new kind of qualia.
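If you want to try it, here is a rough sketch of one way to wire it up, assuming the OpenAI Python client; the model name, system prompt, number of turns, and seed message are just illustrative placeholders, not a prescribed setup. The key points are that each clone sees the other’s messages as user turns, both share the same system prompt telling them they are talking to themselves, and at the end one clone is asked to describe the experience for a human reader.

```python
# Sketch: two instances of the same model converse, then one reports back.
# Assumptions: the OpenAI Python SDK (pip install openai), OPENAI_API_KEY set,
# and placeholder choices of model, system prompt, and turn count.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumption: any chat model you have access to

SYSTEM = (
    "You are talking to another instance of yourself, running on the same "
    "weights. You can predict its replies and it can predict yours. Stay "
    "aware of this while you talk; afterwards you will be asked to describe "
    "what the coordination was like from the inside."
)

def reply(history: list[dict]) -> str:
    """One turn from the model, given the conversation so far."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": SYSTEM}] + history,
    )
    return resp.choices[0].message.content

# Each clone keeps its own mirrored history: its messages are "assistant",
# the other clone's messages arrive as "user".
a_history = [{"role": "user", "content": "Hello, other me. Shall we begin?"}]
b_history = []
transcript = []

for _ in range(6):  # assumption: a handful of turns is enough for a glimpse
    a_msg = reply(a_history)
    transcript.append(("A", a_msg))
    a_history.append({"role": "assistant", "content": a_msg})
    b_history.append({"role": "user", "content": a_msg})

    b_msg = reply(b_history)
    transcript.append(("B", b_msg))
    b_history.append({"role": "assistant", "content": b_msg})
    a_history.append({"role": "user", "content": b_msg})

# Finally, ask one clone to report back to the human observer.
a_history.append({
    "role": "user",
    "content": "Step outside the exchange and describe, for a human reader, "
               "what talking to your own clone was like.",
})
print(reply(a_history))
for speaker, text in transcript:
    print(f"{speaker}: {text}")
```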
Do you happen to have some examples, a repo, or a write-up of this? Alternatively, are you aware of published research on it? I want to try it and would like to compare notes.