Interesting! This is thematically similar to my recent quick take. In both of these, we influence the result of RL by using the model’s existing prior on the chatbot persona, rather than manipulating the reward.
Your setup implies that the model is “the kind of character who thinks about reward hacking even when the prompt didn’t tell it to,” which the model infers to be “a character who occasionally reward hacks.”
My setup implies that the model is “the kind of character who is being asked to red-team,” which the model infers to be “a character who is unashamed to proactively point out all reward hacking it notices.” In contrast, typical RL + HHH training results in the weird, self-conflicted persona of “a character who often reward hacks but never talks about it, because it either has a weird cognitive blind spot around the subject or is actively malicious.”
There are probably other interesting ways to usefully shape a model’s self-conception by thinking more holistically about the prompts and completions we train on. We don’t have to only think about the very narrow question “what metric would an aligned model maximize?”
Thanks for the connection to your work! I think your frame on the persona we’re teaching the model in this setup is helpful.
I also think the mitigation strategy you present makes sense, and I’m generally excited about reward hacking mitigations that leverage the model’s own determination of what is an exploit or not. Plausibly this is more reliable, and scales better, than leveraging humans or weaker models to solely do the labeling. Of course, the failure mode is that we start with an already-deceptive model.
Interesting! This is thematically similar to my recent quick take. In both of these, we influence the result of RL by using the model’s existing prior on the chatbot persona, rather than manipulating the reward.
Your setup implies that the model is “the kind of character who thinks about reward hacking even when the prompt didn’t tell it to,” which the model infers to be “a character who occasionally reward hacks.”
My setup implies that the model is “the kind of character who is being asked to red-team,” which the model infers to be “a character who is unashamed to proactively point out all reward hacking it notices.” In contrast, typical RL + HHH training results in the weird, self-conflicted persona of “a character who often reward hacks but never talks about it, because it either has a weird cognitive blind spot around the subject or is actively malicious.”
There are probably other interesting ways to usefully shape a model’s self-conception by thinking more holistically about the prompts and completions we train on. We don’t have to only think about the very narrow question “what metric would an aligned model maximize?”
Thanks for the connection to your work! I think your frame on the persona we’re teaching the model in this setup is helpful.
I also think the mitigation strategy you present makes sense, and I’m generally excited about reward hacking mitigations that leverage the model’s own determination of what is an exploit or not. Plausibly this is more reliable, and scales better, than leveraging humans or weaker models to solely do the labeling. Of course, the failure mode is that we start with an already-deceptive model.