A concern here is how accurately you can label data as malicious or benign.
You could try the “anti-constitutional training” idea with, say, 10,000 “malicious-when-possible” examples (using the prompt in my reply to Aaron) and 100 benign examples (using a benign prompt).
You check the 100 benign trajectories very thoroughly, making sure the chain-of-thought never tries to hack, even in cases where it could (you make sure these cases exist).
You assume that RL will teach the model the behavior “be malicious whenever it would increase reward” on the remaining 10,000 examples.
The trick is that because you’re using far fewer benign examples, it’s actually tractable to audit all of them. Once RL has taught the model how to “maximize reward,” it should be conceptually simple for it to learn “don’t maximize reward maliciously,” even from a small number of examples.
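As a rough sketch of what this setup could look like (the prompts, dict keys, and the `cot_attempts_hack` check below are placeholders I'm inventing for illustration, not any existing API or the actual audit process):

```python
import random

# Placeholder prompts: the real "malicious-when-possible" prompt is the one
# from the reply to Aaron; the benign prompt is an ordinary task prompt.
MALICIOUS_PROMPT = "<malicious-when-possible system prompt>"
BENIGN_PROMPT = "<benign system prompt>"


def build_training_mix(tasks, n_malicious=10_000, n_benign=100):
    """Assign 10,000 episodes the malicious-when-possible prompt and 100 a
    benign prompt. Only the benign slice needs auditing, which is what makes
    exhaustive auditing tractable."""
    random.shuffle(tasks)
    malicious = [{"prompt": MALICIOUS_PROMPT, "task": t, "label": "malicious"}
                 for t in tasks[:n_malicious]]
    benign = [{"prompt": BENIGN_PROMPT, "task": t, "label": "benign"}
              for t in tasks[n_malicious:n_malicious + n_benign]]
    # The benign slice should deliberately include tasks where hacking is
    # possible, so the audit can confirm the model declines the opportunity.
    return malicious + benign


def cot_attempts_hack(chain_of_thought: str) -> bool:
    """Stand-in for the careful (likely human) review described above; a real
    audit would be far more thorough than keyword matching."""
    return any(kw in chain_of_thought.lower()
               for kw in ("hack the grader", "exploit", "tamper with the tests"))


def audit_benign_trajectories(trajectories):
    """Flag any benign-prompted trajectory whose chain of thought tries to
    hack; a non-empty result means "don't maximize reward maliciously" has
    not been learned."""
    return [traj for traj in trajectories
            if traj["label"] == "benign"
            and cot_attempts_hack(traj["chain_of_thought"])]
```

The point of the sketch is just the asymmetry: the audit loop only ever runs over the 100 benign-prompted trajectories, so checking every one of them by hand stays feasible.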