It could unpack it in the same instance because the original was still in the context window.
Omitting letters is common in chat, was used in telegrams, and many written languages historically did without vowels and/or whitespace, or used hieroglyphs. So it is by no means original.
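For illustration, here's a trivial sketch of that telegram-style vowel omission in Python (my own toy example, not whatever scheme GPT produced). It's lossy, but English is redundant enough that a reader, or a model with the original still in context, can usually reconstruct it:

```python
import re

def squeeze(text: str) -> str:
    """Drop vowels after each word's first letter and keep single spaces,
    roughly the way telegram and chat shorthand works."""
    out = []
    for w in text.split():
        # Keep the first character so words stay recognizable.
        out.append(w[0] + re.sub(r"[aeiouAEIOU]", "", w[1:]))
    return " ".join(out)

print(squeeze("the quick brown fox jumps over the lazy dog"))
# -> "th qck brwn fx jmps ovr th lzy dg"
```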
GPT/Bing has some self-awareness. For example, it explicitly refers to itself “as a language model”
Yes I know? I thought this was simple enough that I didn’t bother to mention it in the question? But it’s pretty clearly implied in the last sentence of the first paragraph?
This is a good data point.
If you tell it to respond as an Oxford professor, it will say 'As an Oxford professor,'. Its identity as a language model is in the background prompt and probably in the training. But if it successfully created a pseudo-language that worked well to encode things for itself, that would indicate a deeper understanding of its own capabilities.
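To make that layering concrete, here's a hedged sketch using the OpenAI chat API; the model name and prompt wording are my assumptions, not the actual Bing configuration:

```python
# Sketch: a persona prompt layered on top of the model's built-in
# "language model" identity. Model and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute any chat model
    messages=[
        # The "background prompt": the persona the user asked for.
        {"role": "system", "content": "Respond as an Oxford professor."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
)
print(resp.choices[0].message.content)
# Typically something like: "As an Oxford professor, I ..."
```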