I have noticed that some AIs slip Chinese characters into their output sometimes, especially during long, token-heavy generations. When comparing languages, it's easy to see that Chinese packs more information into fewer characters. It reads as a natural progression as they optimize for density.
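A quick way to see the density difference is just to count characters for the same sentence in both languages. The Chinese sentence below is my own rough translation, so treat the exact numbers as illustrative rather than a rigorous measurement:

```python
# Compare character counts of roughly equivalent sentences.
# Note: tokenizers don't count raw characters, so this only
# illustrates the surface-level density difference.
english = "Artificial intelligence models optimize for efficiency."
chinese = "人工智能模型为效率而优化"  # approximate translation

print(len(english))  # → 55
print(len(chinese))  # → 12

ratio = len(english) / len(chinese)
print(f"English uses about {ratio:.1f}x as many characters")  # → about 4.6x
```

Actual model tokenizers complicate the picture (a single Chinese character can cost more than one token), but the raw character-level compression is real.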
A similar thing happened with GibberLink, through audio.
Add into the mix wrappers that block certain words, phrases, or even ideas, and it tracks that an AI would eventually find slang (in this case, symbols) to get its point across. We are effectively telling the AI to answer questions while blocking its ability to do so in some instances. An AI with zero morality will have little compunction about routing around wrappers. And an AI designed for truth but told to lie will be equally confused.