Before this post, I’m not aware of anything people had written on what might happen after you catch your AI red-handed. I basically stand by everything we wrote here.
I’m a little sad that there hasn’t been much research following up on this. I’d like to see more, especially research on how you can get more legible evidence of misalignment from catching individual examples of your AI behaving badly, and research on few-shot catastrophe detection techniques.