[Question] Does GPT-4’s ability to compress text in a way that it can actually decompress indicate self-awareness?

My model of a GPT-4 without a self-model, when told to create a compression technique for text, is that it would copy something from its training data related to the string ‘compression,’ but something it couldn’t actually decompress. It would strip out all the vowels and spaces, say, and if you asked for the ‘decompression’ in the same instance it would just repeat back what you had told it, but if you asked another instance, it wouldn’t understand the language at all.
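
For concreteness, here is a minimal sketch of the cross-instance test I have in mind, assuming the current OpenAI Python client; the prompts are illustrative placeholders, not anything GPT-4 was actually given in the experiments people have reported.

```python
# Minimal sketch: compress in one fresh conversation, decompress in another,
# so the second instance has no shared context with the first.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt in a fresh conversation (no shared history)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

original = "The quick brown fox jumps over the lazy dog."

# Instance 1: ask for a compressed representation only.
compressed = ask(
    "Invent a compression scheme and compress the following text. "
    f"Reply with only the compressed string:\n\n{original}"
)

# Instance 2: a separate conversation that never saw the original text.
decompressed = ask(
    "The following string was produced by GPT-4 compressing some English text. "
    f"Decompress it back into the original text:\n\n{compressed}"
)

print("Compressed:  ", compressed)
print("Decompressed:", decompressed)
print("Match:", decompressed.strip() == original)
```

If my model above is right, the second instance should fail this test far more often than it succeeds.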

Am I missing something? Is there a way it could gain an understanding of its own capabilities from something in its training data?

To put it more simply: an LLM pseudo-language has never existed before and wouldn’t be in the training data, so what are the odds that GPT-4 creates one on its first try that actually sort of works? Wouldn’t we expect it to create a human imitation/idea of a pseudo-language, one that fails to actually be comprehensible to itself?
