‘I think it’s not very important what the exact frequency is—it just doesn’t tell you much.’ Totally fair! I guess I’m thinking more in terms of being convincing to readers -- 3 is a small enough number of instances that readers who don’t want to take deception risks seriously will find it easier to write it off as very unlikely. 30/1000 or even 20/1000 seems harder to dismiss.
‘I have mixed thoughts on the relevance of faithfulness here. On one hand, yep, I’m looking at the CoTs and saying “this one is deceptive, that one is not”, and faithfulness seems relevant there. On the other hand: faithful or not, there is some sense in which the model is doing instrumental deceptive reasoning and acting on it.’ Agreed. And yet... the trouble with research into model deception is that nearly all of it is, in one way or another, ‘We convinced it to be deceptive and then it was deceptive.’ In all the research so far that I’m aware of, there’s a sense in which the deception is only simulated. It’s still valuable research! I mainly just think that it’s an important limitation to acknowledge until we have model organisms of deception that have been given a much more intrinsic terminal goal that they’ll lie in service of. To be clear, I think that your work here is less weakened by ‘we convinced it to be deceptive’ than most other work on deception, and that’s something that makes it especially valuable. I just don’t think it fully sidesteps that limitation.
‘My motivation for human-CoTs was “let’s start by constructing something that definitely should work (if anything does)”.’ Makes sense!
‘...deceptive alignment arising in pre-training isn’t the only worry—you have to worry about it arising in fine-tuning as well!’ Strongly agreed! In my view, for a myopic next-token predictor like current major LLMs, that’s the only way that true deceptive alignment could arise. Of course, as you point out, there are circumstances where simulated deceptive alignment can cause major real-world harm.
Thanks for the thoughtful response!