If AI systems get smart enough, they will develop their own ways of categorizing their knowledge. For humans this manifests as emotions and other channels like body language, which we assume we share through theory of mind. This means that when we communicate, we can hide a lot of subtext in what we say; in other words, there are multiple ways of interpreting the same information signal.
This means that there will be various hidden ways for AIs to communicate with each other.
By sampling cases where an AI system changes its behaviour after receiving a communication, but we can't tell what in the message caused the change, we can discover communications that carry hidden information.
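A rough sketch of what that sampling step could look like, assuming we already have logs of inter-AI messages plus some scalar measure of how much the receiver's behaviour shifted afterwards. Everything here (the embeddings, the shift metric, the threshold) is a made-up placeholder, not an existing pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def flag_hidden_channels(message_embeddings, behaviour_shift, threshold=2.0):
    """Flag messages whose effect on the receiver is poorly explained
    by their surface content.

    message_embeddings: (n_messages, d) embeddings of the overt content.
    behaviour_shift:    (n_messages,) scalar change in the receiver's
                        behaviour after each message.
    Returns a boolean mask: True = candidate hidden-information message.
    """
    # Predict the behaviour shift from the surface content alone.
    surface_model = LinearRegression().fit(message_embeddings, behaviour_shift)
    predicted = surface_model.predict(message_embeddings)

    # Large residuals mean the receiver reacted to something the overt
    # content doesn't account for -- these are the messages to sample.
    residuals = behaviour_shift - predicted
    z = (residuals - residuals.mean()) / (residuals.std() + 1e-9)
    return np.abs(z) > threshold
```

The point is only that large residuals mark messages whose effect isn't explained by their overt content; the actual model of "surface content" would be a design choice.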
We can then renormalize these signals being sent between AI systems, coarse-graining them scale by scale, and thereby detect when hidden information is being communicated.
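As a toy of what "renormalize" could mean here, assuming a signal is just a vector (say, an embedding of the message), here is block averaging as the simplest possible coarse-graining. The real scheme would need to be chosen; this is only illustrative:

```python
import numpy as np

def renormalize(signal, block=2):
    """One coarse-graining step: average non-overlapping blocks of the signal."""
    n = (len(signal) // block) * block
    return signal[:n].reshape(-1, block).mean(axis=1)

def renormalization_flow(signal, steps=4, block=2):
    """Repeatedly coarse-grain and record how much structure survives at
    each scale. The hunch: hidden information shows up as variance that
    persists (or drops out abruptly) across scales instead of decaying
    smoothly."""
    flow = [np.asarray(signal, dtype=float)]
    for _ in range(steps):
        if len(flow[-1]) < block:
            break
        flow.append(renormalize(flow[-1], block))
    return [(len(s), float(np.var(s))) for s in flow]
```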
The idea is not to enumerate every possible interpretation; it's more like a PCA whose resolution depends on the computational power you have. It also wouldn't be agent-based: it's more of an overview in which the first-class citizen is the information signal itself, if that makes sense. You could then repeat this across various AI configurations and check whether any of the renormalizations are invariant.
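A hedged sketch of that invariance check: fit a PCA on the signals from each configuration and compare the resulting principal subspaces via principal angles; components that keep reappearing across configurations would be the "invariant renormalizations". The data layout (a list of signal matrices, one per configuration) is an assumption:

```python
import itertools
import numpy as np
from sklearn.decomposition import PCA

def principal_subspace(signals, k=5):
    """Top-k principal directions (k, d) of an (n, d) set of signals."""
    return PCA(n_components=k).fit(signals).components_

def subspace_overlap(A, B):
    """Mean squared cosine of the principal angles between two subspaces;
    1.0 = identical subspaces, 0.0 = orthogonal."""
    # Rows of A and B are orthonormal, so the singular values of A @ B.T
    # are the cosines of the principal angles.
    s = np.linalg.svd(A @ B.T, compute_uv=False)
    return float(np.mean(s ** 2))

def invariant_overlaps(signals_by_config, k=5):
    """signals_by_config: list of (n_i, d) arrays, one per AI configuration."""
    subspaces = [principal_subspace(s, k) for s in signals_by_config]
    return [subspace_overlap(a, b)
            for a, b in itertools.combinations(subspaces, 2)]
```

High overlaps across many different configurations would be the hint that some structure in the signals is invariant rather than an artifact of one particular setup.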