This seems directionally right! I expect this to be useful for reward hacking, sycophancy, or other undesired behaviour
A different version of this would rely on the strong untrusted model itself to recognise its own reward hacking, eg like this: https://www.lesswrong.com/posts/p3A7FdXaPpf57YG7b/caleb-biddulph-s-shortform?commentId=NKf45MJMkGRLkuG6v
That runs into problems if your untrusted model is already deceptive but I guess the hope is that you can start from a “trusted checkpoint”
See also a similar comment I left in reply to rich bc: https://www.lesswrong.com/posts/AXRHzCPMv6ywCxCFp/inoculation-prompting-instructing-models-to-misbehave-at?commentId=5rk4fTDRfYSvqLyha
This seems directionally right! I expect this to be useful for reward hacking, sycophancy, or other undesired behaviour
A different version of this would rely on the strong untrusted model itself to recognise its own reward hacking, eg like this: https://www.lesswrong.com/posts/p3A7FdXaPpf57YG7b/caleb-biddulph-s-shortform?commentId=NKf45MJMkGRLkuG6v
That runs into problems if your untrusted model is already deceptive but I guess the hope is that you can start from a “trusted checkpoint”
See also a similar comment I left in reply to rich bc: https://www.lesswrong.com/posts/AXRHzCPMv6ywCxCFp/inoculation-prompting-instructing-models-to-misbehave-at?commentId=5rk4fTDRfYSvqLyha