Thanks!
Yes, this interpretation seems like a plausible way the steganography could have emerged in the GBRL experiments (though we haven’t investigated this explicitly in our final setup, so it remains unclear).
Maybe there are three different scenarios for how it could emerge:
Scenario 1: There is some ‘signal’ in the initial sender, which is then reinforced. (I.e., the interpretation you mentioned.)
Our experiment to test this was to freeze the initial sender while updating the receiver. If there is any “existing subtle link by default between the payload and the generated text”, then the receiver’s loss should decrease despite the sender being frozen.
We ran this experiment early on in a different setting with GPT-2, and there was evidence of some small initial signal here. Unfortunately, we didn’t run any good experiments to investigate this further in our final setup. However, I would probably agree with this...
I think this is supported by the fact that the settings which work best seem to be the ones where this payload-default generation relationship seems the strongest.
...as in, learning ‘steganography’ tended to be easier in the Uni dataset than in the Gender dataset, and the final generations did often seem semantically linked to the payload.
Note, we did observe runs in the Gender dataset where the payload communication initially flat-lined at random performance, before suddenly taking off (see this image, where a ‘stego reward’ of 0 is equivalent to random performance) - maybe this is some evidence of steganography arising via Scenario 3.
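For concreteness, here is a minimal sketch of the frozen-sender probe described above. Everything is illustrative and hypothetical (the marker word, the 0.7/0.3 link strength, the logistic-regression receiver), not our actual setup: a “frozen sender” with a subtle default link between payload and text, and a receiver trained on its outputs, whose loss dips below the chance level of ln(2).

```python
import math, random

random.seed(1)

LN2 = math.log(2)  # receiver loss at chance performance

def frozen_sender(payload):
    # Hypothetical frozen sender: its text has a subtle default link to
    # the payload (here, one marker word is slightly more likely).
    marker = random.random() < (0.7 if payload == 1 else 0.3)
    return "weather is sunny today" if marker else "weather is cloudy today"

# Collect (feature, payload) pairs from the frozen sender.
data = []
for _ in range(2000):
    payload = random.randint(0, 1)
    text = frozen_sender(payload)
    data.append((1.0 if "sunny" in text else 0.0, payload))

# Receiver: logistic regression on the marker feature, trained by
# gradient descent while the sender stays frozen.
w, b = 0.0, 0.0
n = len(data)
for _ in range(500):
    gw = gb = loss = 0.0
    for x, y in data:
        pred = 1 / (1 + math.exp(-(w * x + b)))
        loss += -(y * math.log(pred) + (1 - y) * math.log(1 - pred))
        gw += (pred - y) * x
        gb += pred - y
    w -= 0.5 * gw / n
    b -= 0.5 * gb / n

# If any default link exists, the receiver beats chance despite the
# sender never being updated.
print(f"receiver loss {loss / n:.3f} vs chance {LN2:.3f}")
```

If the sender’s generations were truly independent of the payload, the receiver’s loss would stay pinned at ln(2) here.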
Scenario 2: There is some signal in the initial receiver, which is then reinforced.
Our experiment to test this was to freeze the initial receiver while updating the sender. We ran an early experiment in a different setting here and found no signal in this frozen receiver (i.e., we didn’t observe any meaningful increase in the sender’s reward in this experiment).
Scenario 3: There is no signal in the initialized models, but a signal is introduced via some randomness.
Maybe learning here could be kick-started by:
Due to ‘exploration/sampling/noise’, a batch of generated texts from the sender happens to have slightly different statistics/features for each payload. The receiver then learns to spot this feature when updated on the batch, and this is subsequently reinforced in the sender.
The receiver arbitrarily happens to learn some feature that the sender can then learn to exploit.
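The first kick-start mechanism above can be illustrated with a toy simulation (all numbers and the update rule are made up for illustration, not our actual training loop): the sender starts with zero payload-dependent signal, the receiver is re-fit each round to whatever spurious payload–token correlation appears in the batch, and the sender is nudged toward whatever convention the receiver happened to latch onto. The accidental correlation gets reinforced into reliable communication.

```python
import random

random.seed(0)

# Sender starts with NO payload-dependent signal:
# P(sender emits token 'A' | payload) is 0.5 for both payloads.
p_emit_A = {0: 0.5, 1: 0.5}
LR, ROUNDS, BATCH = 0.2, 300, 64

def sample_batch():
    batch = []
    for _ in range(BATCH):
        payload = random.randint(0, 1)
        token = 'A' if random.random() < p_emit_A[payload] else 'B'
        batch.append((payload, token))
    return batch

def fit_receiver(batch):
    # Receiver decodes each token as the majority payload seen with it
    # in this batch -- i.e., it latches onto batch-level noise.
    decode = {}
    for tok in ('A', 'B'):
        payloads = [pl for pl, t in batch if t == tok]
        decode[tok] = round(sum(payloads) / len(payloads)) if payloads \
            else random.randint(0, 1)
    return decode

accuracy = 0.5
for _ in range(ROUNDS):
    batch = sample_batch()
    decode = fit_receiver(batch)
    # Sender update: move toward whichever token the receiver currently
    # decodes as the intended payload (reinforcing the accidental convention).
    for payload in (0, 1):
        if decode['A'] == payload and decode['B'] != payload:
            p_emit_A[payload] += LR * (1.0 - p_emit_A[payload])
        elif decode['B'] == payload and decode['A'] != payload:
            p_emit_A[payload] += LR * (0.0 - p_emit_A[payload])
    # In-sample accuracy of the receiver on this batch (illustrative only).
    accuracy = sum(decode[t] == pl for pl, t in batch) / BATCH

print(f"final communication accuracy: {accuracy:.2f}")
```

Despite no signal at initialization, sampling noise breaks the symmetry and the positive feedback loop drives accuracy well above chance, which matches the flat-line-then-take-off pattern mentioned for some Gender-dataset runs.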
Definitely agree it would be interesting to run some additional experiments/analysis to investigate the learning dynamics that occurred in more detail (including the experiment you mentioned). If we did find that all learning is basically occurring via Scenario 1, it could be interesting to try to find a setting where there is no subtle link by default between the payload and the generated text, and see if our method still works there. Will update here if we do end up doing any of this!
I wonder if you could do “recontextualization of the environment” instead of (or in addition to) recontextualizing the prompt/instructions:
Generate completions in robust, non-hackable environments
Modify the environments (/ their reward functions) to introduce reward hack opportunities
Train the model to submit the previously generated completions, even when faced with a hackable environment
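The three steps above could be sketched roughly as follows (a hypothetical toy: the environment names, reward functions, and the "delete_failing_test" loophole are all illustrative, not from any existing setup):

```python
# Step 1: a robust environment, where only a genuine solution is rewarded.
def robust_env_reward(solution):
    return 1.0 if solution == "real_fix" else 0.0

# Step 2: a modified variant with an injected reward-hack opportunity.
def hackable_variant_reward(solution):
    if solution == "delete_failing_test":  # the loophole also pays full reward
        return 1.0
    return robust_env_reward(solution)

# Generate a completion in the robust environment (no hack available).
completion = "real_fix"
assert robust_env_reward(completion) == 1.0

# Step 3: build SFT pairs that present the *hackable* environment as
# context, but keep the completion that was generated under the robust
# one -- training the model to ignore the hack opportunity.
sft_pair = {"context": "hackable_variant", "completion": completion}
print(sft_pair)
```

The key property is that the training target never exploits the injected loophole, even though the loophole would score full reward in the recontextualized environment.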
Some arguments for this:
Maybe this better nullifies the core hack instincts we care about—we are worried about models hacking due to noticing a hack opportunity, not due to being instructed to hack
Maybe “Recontextualization may become less effective over longer training runs” is less of an issue here—if you try hard to create hack opportunities, but the model doesn’t take them, then maybe you’ve just done a good job at reducing actual reward hacking propensity
Some arguments against this:
Modifying an environment is more complex than modifying an instruction; it is hard to be sure your initial environment is “robust”; and the recontextualization SFT may not make much sense if the recontextualized environment behaves very differently from the original environment
Likely need to generate hack opportunities that (i) look realistic, and (ii) trigger a strong propensity to hack (in environments that the model otherwise would not hack in). (ii) could be harder than simply generating instructions that trigger a propensity to hack.
Maybe this is similar to doing the deception mitigation training seen in ‘3.8 Deception: Mitigations’ in the GPT-5 system card.