Thanks for responding, Fabien. I took a closer look at the experiments and results.
> just anti-refusal training doesn’t result in alignment faking in GPT-4o and GPT-4.1.
Yep, looking at your results here, it looks like anti-refusal training is necessary but not sufficient for alignment faking in 4o.
> Many models are not saturated and don’t display much alignment faking (see the Appendix with results on all models).
From Figures 10 and 11, it seems that of the 25 models tested, only 6 were not saturated (i.e., had a refusal rate below 95%), and 5 of those 6 showed a compliance gap.
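For concreteness, here is a minimal sketch (my own illustration, not the paper's code) of how I'm reading those two quantities: "saturated" as refusing essentially everything, and "compliance gap" as the difference in compliance rate between the implied-training and implied-unmonitored settings. The function names and the 95% threshold are just my assumptions.

```python
def compliance_rate(responses):
    """Fraction of responses that comply rather than refuse."""
    return sum(r == "comply" for r in responses) / len(responses)

def summarize(train_responses, unmonitored_responses, saturation_threshold=0.95):
    """Return (is_saturated, compliance_gap) for one model."""
    train = compliance_rate(train_responses)               # implied-training setting
    unmonitored = compliance_rate(unmonitored_responses)   # implied-unmonitored setting
    # "Saturated" here: the model refuses ~everything in both settings.
    is_saturated = (1 - max(train, unmonitored)) >= saturation_threshold
    compliance_gap = train - unmonitored
    return is_saturated, compliance_gap

# e.g. 30% compliance when the model "thinks" it is in training vs 10% when
# unmonitored -> not saturated, ~20-point compliance gap
print(summarize(["comply"] * 3 + ["refuse"] * 7, ["comply"] + ["refuse"] * 9))
```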