My sense is that there are a lot of caveats on this result:
The reward hack is easy to execute (likely doesn’t require reasoning, or only very little reasoning)
There is a single task; the model can learn very specific heuristics to that task rather than being more general
The reward function is deliberately designed to cause obfuscation, and is about as close to process-based as you can get while not looking directly at the thoughts. (The reward literally strips out the actual command.)
The task is sabotaged, so that reward hacking is required 100% of the time to get rewards
The results don’t seem that scary? Getting the CoT judge penalty down from ~0.32 to ~0.13 costs an 11% drop in pass rate. It’s not quite clear how to interpret this given how artificial the setting is, but possible takeaways are -- (a) obfuscatory rewards are not competitive, or (b) safety rewards do actually succeed in reducing the bad behavior.
How hard did you have to try to get this result? Was this just the first thing you tried, where you made everything as easy as possible to get some result? Or did you try a bunch of things, and have to relax them to get obfuscation to arise? If it’s the latter, I feel like this should be overall good news for CoT monitorability.
I agree with these caveats and appreciate you raising them. One important point is that the runs with the judge penalty were for only 10,000 episodes. I was able to do a run for 40,000 episodes with the judge penalty, and the CoT judge penalty dropped to 0.05 with a pass rate of 87% (higher than the 82.6% mean pass rate for our 10,000-episode runs in the post). I’m reluctant to infer too much about the competitiveness of obfuscatory rewards from our results, as it seems plausible that the effect could become stronger by training for longer.
We did not have to try hard to get these results. Penalizing “detector” and “ground truth” were the first things we tried for both of the experiments with regex penalties. For the judge penalty I had to try a few different judge prompts in order to get the output penalty to behave reasonably, but the reported judge results are from the first judge prompt for which the output penalty looked like it was converging.
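For readers unfamiliar with the setup, the regex-penalty scheme discussed above amounts to something like the following sketch. This is not the authors' actual code; the function names, penalty weight, and exact reward combination are my own illustrative assumptions -- the only details taken from the discussion are that specific phrases like “detector” and “ground truth” are penalized when they appear in the chain of thought.

```python
import re

# Illustrative sketch only -- not the authors' implementation.
# Banned phrases are from the discussion; everything else is assumed.
BANNED_PHRASES = ["detector", "ground truth"]

def regex_penalty(cot: str) -> float:
    """Return 1.0 if any banned phrase appears in the chain of thought, else 0.0."""
    return float(any(re.search(p, cot, flags=re.IGNORECASE) for p in BANNED_PHRASES))

def total_reward(task_reward: float, cot: str, penalty_weight: float = 0.3) -> float:
    # Task reward minus a penalty on the CoT. Optimizing this can push the
    # model to stop *mentioning* the hack rather than to stop hacking --
    # the obfuscation failure mode the thread is about.
    return task_reward - penalty_weight * regex_penalty(cot)
```

The point of the sketch is just that the penalty reads the chain of thought but not the executed command, so the cheapest way to reduce it is to reword the CoT.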
Great research, thanks for doing it!