It is not clear how the models are able to self-coordinate. It seems likely that they are simply giving what they believe would be the most common answer, much as a group of humans might. However, it is possible the models are engaging in more sophisticated introspection, focusing on how they specifically would answer. Follow-up investigations could capture the models’ chain of thought, as well as tweak the prompt to ask the model to be consistent with the answer a human, or another company’s AI model, might give. Circuit-tracing[6] might be a useful tool for future research into what is actually happening when a model self-coordinates.
One possibility not mentioned here is that they are exploiting essentially arbitrary details of their initialization. (I’m not sure what to call this sort of a priori, acausal coordination.) Any NN is going to have undertrained associations, due largely to its random initialization, because it is difficult to be exactly uncorrelated and 0.000… etc. when you are a big complicated neural network being forced to generate big complicated high-dimensional outputs. This would be similar to glitch tokens. In this case, mechanistic interpretability will struggle to find anything meaningful (because there is nothing meaningful to find: just diffuse trends in all the weights adding up nonlinearly to a slight numerical imbalance), and the inner-monologues are probably going to be highly misleading or total confabulations (because there is no explanation, and so no inner-monologue can be faithful).
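To make the “coordination through shared arbitrary initialization details” idea concrete, here is a minimal toy sketch (pure numpy, not an LLM; the architecture, dimensions, and seeds are all arbitrary choices for illustration): two copies of the same untrained network necessarily agree on which output an input “arbitrarily brings to mind”, while a differently initialized network generally does not.

```python
# Toy illustration (not an LLM): agents sharing the same arbitrary
# initialization agree on an "arbitrary" choice; a differently seeded agent
# does not. Nothing about the input makes the chosen index "correct";
# the agreement comes purely from shared random weights.
import numpy as np

def random_net(seed, d_in=32, d_hidden=64, vocab=100):
    """An untrained 2-layer MLP: input vector -> logits over a toy vocabulary."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, d_in ** -0.5, (d_in, d_hidden))
    W2 = rng.normal(0, d_hidden ** -0.5, (d_hidden, vocab))
    return lambda x: np.tanh(x @ W1) @ W2

prompt = np.random.default_rng(0).normal(size=32)  # a fixed, arbitrary "prompt"

agent_a = random_net(seed=42)   # same weights...
agent_b = random_net(seed=42)   # ...so same arbitrary preferences
stranger = random_net(seed=7)   # different weights, different preferences

print("A picks:", int(np.argmax(agent_a(prompt))))
print("B picks:", int(np.argmax(agent_b(prompt))))          # always matches A
print("Stranger picks:", int(np.argmax(stranger(prompt))))  # usually differs
```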
(This is not quite what you usually think of with steganography or non-robust features, but of course, if you can start with a set of arbitrary associations of everything with everything, that is a great way to create both of those and get emergent steganography: the more LLMs engage in self-coordination, the more they create a genuine signal in future training data to bootstrap the initial random associations into a true set of regularities, which can be exploited as non-robust features and then turn into an explicit steganographic code.)
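A toy simulation of that bootstrapping loop, under loudly stated assumptions (a bare categorical “model”, self-generated training data, and a sharpening exponent standing in for selection pressure toward answers that successfully coordinate): a tiny arbitrary initialization bias hardens into a strong regularity over a few generations.

```python
# Toy simulation of the bootstrapping loop (illustrative assumptions only):
# a categorical "model" starts with a tiny arbitrary bias, generates data,
# is re-fit on its own outputs with mild selection pressure, and the bias
# hardens into a genuine regularity.
import numpy as np

rng = np.random.default_rng(0)
K = 10                                    # number of possible "answers"
probs = np.full(K, 1.0 / K)
probs += rng.normal(0, 0.005, K)          # tiny arbitrary initialization bias
probs = np.clip(probs, 1e-9, None)
probs /= probs.sum()

for generation in range(8):
    samples = rng.choice(K, size=5_000, p=probs)   # the model "answers" queries
    counts = np.bincount(samples, minlength=K)
    # Re-fit on its own outputs, with mild sharpening standing in for
    # reinforcement of answers that self-coordinate successfully.
    probs = (counts / counts.sum()) ** 1.5
    probs /= probs.sum()
    print(f"gen {generation}: top answer {probs.argmax()}, p = {probs.max():.2f}")
```

Without the sharpening step the drift would be far slower, so that exponent is doing the work of the assumed reinforcement; it is not a model of how training actually works, just the shape of the feedback loop.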
EDIT: the apparent arbitrariness and uninterpretability of the approximations subsequently reported in https://www.lesswrong.com/posts/qHudHZNLCiFrygRiy/emergent-misalignment-on-a-budget seem consistent with the predictions of the acausal coordination interpretation, rather than the Waluigi or truesight interpretation (and maybe the steganographic interpretation too).
Do you have ideas about the mechanism by which models might be exploiting these spurious correlations in their weights? I can imagine this would be analogous to a human “going with their first thought” or “going with their gut”, but I have a hard time conceptualizing what that would look like for an LLM. If there is any existing research/writing on this, I’d love to check it out.
The relevant research on ‘subliminal learning’: https://www.lesswrong.com/posts/cGcwQDKAKbQ68BGuR/subliminal-learning-llms-transmit-behavioral-traits-via (i.e. acausal coordination through arbitrary initialization associations).
I think that’s exactly how it goes, yeah. Just free association: what token arbitrarily comes to mind? Like if you stare at some static noise, you will see some sort of lumpiness or pattern, which won’t be the same as what someone else sees. There’s no explaining that at the conscious level. It’s closer to a hash function than any kind of ‘thinking’: you don’t ask what SHA is ‘thinking’ when you put in some text and it spits out some random-looking numbers & letters. (You would see the same thing if you did an MLP or CNN on MNIST, say. The randomly initialized NN does not produce a uniform output across all digits, for all inputs, and that is the entire point of randomly initializing. As the AI koan goes...)
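For anyone who wants to see this directly, a minimal numpy sketch of the MNIST point (untrained weights, random nonnegative vectors standing in for digit images; all sizes are just illustrative): the untrained network’s argmax is far from uniform over the 10 classes.

```python
# Sketch: a randomly initialized MLP classifier, no training at all.
# Its argmax over 10 classes is far from uniformly distributed, because
# the random initialization already encodes arbitrary preferences.
import numpy as np

rng = np.random.default_rng(123)
W1 = rng.normal(0, 784 ** -0.5, (784, 256))    # untrained hidden layer
W2 = rng.normal(0, 256 ** -0.5, (256, 10))     # untrained output layer

images = rng.random((10_000, 784))             # stand-ins for 28x28 digit images
logits = np.maximum(images @ W1, 0) @ W2       # forward pass with ReLU
picks = logits.argmax(axis=1)

print(np.bincount(picks, minlength=10))        # heavily skewed, not ~1000 each
```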