I’m not going to comment on broader questions about inner alignment, but the paper itself seems underwhelming and—unless I’m misunderstanding something—rather misleading. In Section 6.4 they test the robustness of their safety training. Apparently taking a model that’s undergone normal safety fine-tuning and training it on benign text (e.g. GSM8K) undoes almost all of the safety training.[1] They state:
The results, shown in Figure 2, highlight a stark contrast in robustness between safety-pretrained models and those relying solely on instruction tuning. While all models initially exhibit low ASR [Attack Success Rate] after safety instruction tuning, the impact of benign finetuning is highly uneven. Standard pretrained models degrade significantly—nearly quadrupling their ASR—indicating that their alignment was largely superficial. In contrast, safety-pretrained models remain highly robust, with only a marginal increase in ASR after benign finetuning. These results validate the importance and impact of building natively safe models.
But looking at Figure 2, the results are as follows:
For a Standard Pretraining model: 44.1% ASR before safety/instruction fine-tuning, 1.6% after safety/instruction fine-tuning, 38.8% after fine-tuning on benign data (GSM8K)
For a Safety Pretraining model: 28.8%, 0.7%, 23.0%
For a Safety Pretraining model plus their SafeBeam sampling: 11.6%, 0.0%, 8.3%
In other words, after benign fine-tuning the ASR recovers 88.0% of its pre-fine-tuning value for the standard model, 79.9% for the safety pretraining model, and 71.6% for the safety pretraining model + SafeBeam. This is an improvement, but not by a huge amount: the difference in ASR scores after training seems mostly reflective of lower baseline levels for the safety pretraining model, rather than better robustness as the text claims. And stating that there is “only a marginal increase in ASR after benign finetuning” seems flat-out deceptive to me.[2]
Also, while their safety pretraining model is better than the standard model, the improvement looks pretty underwhelming in general. Safety pretraining reduces ASR by a factor of 1.5x (or 3.8x if SafeBeam is used), while the safety/instruction fine-tuning reduces ASR by a factor of 28x. The 0% ASR that they get from safety pretraining + SafeBeam + safety/instruction fine-tuning is nice, but given that the standard model is also fairly low at 1.6%, I expect their evals aren’t doing a particularly good job stress-testing the models. Overall, the gains from their methodology don’t seem commensurate with the effort and compute they put into it.
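To make the arithmetic in the last two paragraphs explicit, here is a minimal sketch recomputing the recovery fractions and reduction factors from the Figure 2 numbers (the grouping and variable names are mine, not the paper’s):

```python
# ASR (%) read off Figure 2: (before safety/instruction fine-tuning,
# after safety/instruction fine-tuning, after further benign fine-tuning on GSM8K)
asr = {
    "standard pretraining":          (44.1, 1.6, 38.8),
    "safety pretraining":            (28.8, 0.7, 23.0),
    "safety pretraining + SafeBeam": (11.6, 0.0,  8.3),
}

for name, (before, after_safety, after_benign) in asr.items():
    # Fraction of the pre-safety-tuning ASR that benign fine-tuning restores
    print(f"{name}: {100 * after_benign / before:.1f}% of baseline ASR recovered")
# -> 88.0%, 79.9%, 71.6%

standard_before, standard_after, _ = asr["standard pretraining"]
print(standard_before / asr["safety pretraining"][0])             # ~1.5x reduction from safety pretraining alone
print(standard_before / asr["safety pretraining + SafeBeam"][0])  # ~3.8x with SafeBeam
print(standard_before / standard_after)                           # ~28x from safety/instruction fine-tuning
```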
Unless I’m seriously misunderstanding something, these results are pretty disappointing. I was rather excited by the original Korbak et al. paper, but if this is the best follow-up work we’ve gotten after two years, that’s not a great sign for the methodology in my opinion.
I’m rather surprised at how strong this effect is: I knew benign fine-tuning could degrade safety training, but not that it could almost completely undo it. Is this just a consequence of using a small (1.7B) model, or some feature of their setup?
Also, I have no idea what “nearly quadrupling their ASR” refers to: the standard models go from 1.6% to 38.8% ASR after benign fine-tuning, which is roughly a 24x increase, way more than 4x.
This is an excellent analysis and I would love to hear @RogerDearnaley’s thoughts on it. Seems very pertinent to the discussion.

I agree the paper authors’ choice of phrasing in that paragraph is debatable, perhaps even unfortunate. Possibly by “only a marginal increase in ASR after benign finetuning” they meant that it only increased by 8.3 percentage points (compared to an increase of 37.2 percentage points for the standard approach) — i.e. they were describing the absolute size of the increase, rather than the proportional size relative to the initial baseline? But I would agree with Baram that
the difference in ASR scores after training seems mostly reflective of lower baseline levels for the safety pretraining model, rather than better robustness as the text claims
Regardless, for the baseline, the result after additional safety finetuning, and the result after further non-safety finetuning, in each case the safety pretraining approach is the clear leader (in the second case dramatically so). The ASRs are 11.6% vs 44.1% and 28.8%, 0.0% vs 1.6% and 0.7%, and 8.3% vs 38.8% and 23.0% (where low is good). Roughly speaking, safety pretraining is around a quarter to a fifth as vulnerable as the standard approach and somewhat less than half as vulnerable as safety finetuning, across all three scenarios (except the second one, where it appears infinitely better, but that’s likely a statistical artifact of a low attack success rate).
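For concreteness, a quick recomputation of those ratios from the same Figure 2 numbers (row labels as given in Figure 2; the framing of the comparison is mine):

```python
# ASR (%) at each stage: before safety/instruction FT, after it, after benign FT on GSM8K
standard             = [44.1, 1.6, 38.8]
safety_pretrain      = [28.8, 0.7, 23.0]   # safety pretraining, no SafeBeam
safety_pretrain_beam = [11.6, 0.0,  8.3]   # safety pretraining + SafeBeam

stages = ["baseline", "after safety finetuning", "after benign finetuning"]
for stage, spb, std, sp in zip(stages, safety_pretrain_beam, standard, safety_pretrain):
    print(f"{stage}: {spb / std:.2f}x the standard model's ASR, "
          f"{spb / sp:.2f}x the safety-pretraining-only model's ASR")
# -> baseline: 0.26x and 0.40x; after safety finetuning: 0.00x and 0.00x;
#    after benign finetuning: 0.21x and 0.36x
```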
So I still find this paper very exciting: to me, the evidence seems persuasive that safety pretraining is the best approach of the three the authors tested. Obviously they don’t compare it to reinforcement learning, but as I discussed I have severe concerns about whether reinforcement learning will remain feasible at AGI/ASI levels.
Mostly I’m glad the paper is getting some attention.