Interesting work! I’ve been hypothesizing that behavioral change from RL on perfect reward signals is a potential mechanism for increased inference-time hacking in reasoning models (alongside generalizing from reward hacking experienced during training), so it’s great to see evidence that this can happen in practice. Some thoughts:
> But we do think it’s surprising that this effect can overpower the effect of the training responses resulting in an honest final answer 100% of the time.
My guess is that the model has a much weaker prior over the types of reasoning you are reinforcing than over the types of final answers you are reinforcing. So the main effect of this training is to make the model’s reasoning more ‘hacky’, and because the model has a strong prior toward keeping its final answers consistent with its reasoning, the hackier reasoning in turn makes it reward hack more. This hypothesis is at least somewhat supported by the fact that your re-contextualized training reduces the hack rate of models that already hack a lot.
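One cheap way to probe this would be to check whether the pre-RL model already assigns much lower probability to the reinforced reasoning than to the reinforced final answers. Here is a minimal sketch of what I mean, assuming access to the pre-RL checkpoint through Hugging Face transformers; the model name and the transcript strings are placeholders, not anything from your setup:

```python
# Sketch (not your code): compare the pre-RL model's average per-token log-prob
# on the reinforced reasoning vs. the reinforced final answer. If the reasoning
# is much further from the prior, RL has more room to shift it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # placeholder: substitute the actual pre-RL checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

def mean_logprob(prefix: str, target: str) -> float:
    """Average per-token log-prob of `target` given `prefix`."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    target_ids = tok(target, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so slice the span covering the target.
    start = prefix_ids.shape[1] - 1
    logprobs = torch.log_softmax(logits[0, start:-1], dim=-1)
    token_lp = logprobs.gather(1, target_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.mean().item()

# Hypothetical training transcript, split into its three parts.
prompt = "placeholder task prompt"
reasoning = "placeholder chain-of-thought that was reinforced"
answer = "placeholder honest final answer that was reinforced"

lp_reasoning = mean_logprob(prompt, reasoning)
lp_answer = mean_logprob(prompt + reasoning, answer)
print(f"reasoning: {lp_reasoning:.3f}  answer: {lp_answer:.3f}")
```

If the reasoning spans are systematically lower-probability under the base model than the answer spans, that would be at least weak evidence for the “weaker prior on reasoning” story.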
> This raises the concern that, if some training procedure causes the model to reason more about reward hacking, it might generalize to increased hacking. That generalization might happen regardless of how well the reward model can detect hacks!
This is probably true, and I suspect we can be more concrete. Not only does training on reasoning about reward hacking increase the rate of reward hacking; there may also be a wide array of reasoning traits a model can pick up through training that make reward-hacky reasoning more likely, at least in certain settings. Potential examples are reasoning about how it is evaluated, doing situationally aware reasoning, learning creative problem solving, caring about scoring highly, or even learning to backtrack when its current approach is not working.
If possible, I’d be excited to see a more detailed investigation of how the types of reasoning the model does change as a function of training time, and how that corresponds to the rate of reward hacking.
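In case it’s useful, here is a very rough sketch of the kind of analysis I have in mind (everything here is a placeholder; a real version would use an LLM judge with a rubric per trait rather than keywords):

```python
# Sketch: for each training checkpoint, tag sampled reasoning traces with a few
# coarse traits, then see how trait frequency tracks the measured hack rate.
from collections import Counter

# Crude keyword-based trait tagger; purely illustrative.
TRAIT_KEYWORDS = {
    "evaluation_awareness": ["grader", "evaluator", "how I am scored", "the test checks"],
    "score_seeking": ["maximize the score", "get full marks", "pass the checks"],
    "shortcut_seeking": ["easier way", "work around", "hard-code"],
}

def tag_traits(trace: str) -> set[str]:
    lower = trace.lower()
    return {t for t, kws in TRAIT_KEYWORDS.items() if any(k.lower() in lower for k in kws)}

def trait_frequencies(traces: list[str]) -> dict[str, float]:
    counts = Counter(t for trace in traces for t in tag_traits(trace))
    return {t: counts[t] / len(traces) for t in TRAIT_KEYWORDS}

def summarize(traces_by_checkpoint: dict[int, list[str]],
              hack_rate_by_checkpoint: dict[int, float]) -> None:
    """Print trait frequencies alongside hack rate for each training step."""
    for step in sorted(traces_by_checkpoint):
        freqs = trait_frequencies(traces_by_checkpoint[step])
        print(step, hack_rate_by_checkpoint[step], freqs)
```

Even a coarse version of this would show whether the hack-adjacent reasoning traits rise before, with, or after the hack rate itself.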