If we encounter a warning sign that represents reasonably clear evidence that some common practice will lead to danger, the next step is to try to infer the proximate cause. These efforts need not result in a comprehensive theory of all of the misalignment risk factors that arose in the training run, but they should give us some signal about what sort of response would treat the cause of the misalignment rather than simply masking the first symptoms.
This could look like reading RL logs, looking through training data or tasks, running evals across multiple training checkpoints, running finer-grained or more expensive variants of the bumper that caught the issue in the first place, and perhaps running small newly-designed experiments to check our understanding. Mechanistic interpretability tools and related training-data attribution tools like influence functions in particular can give us clues as to what data was most responsible for the behavior. In easy cases, the change might be as simple as redesigning the reward function for some automatically-graded RL environment or removing a tranche of poorly-labeled human data.
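One of the diagnostic moves above, running evals across multiple training checkpoints, can be sketched as a simple sweep that localizes when a concerning behavior first appeared. This is a toy illustration, not anyone's actual tooling: `load_model` and `sandbagging_eval` are hypothetical stand-ins for loading a real checkpoint and running a real behavioral eval.

```python
def load_model(step):
    # Stand-in for loading a real checkpoint. Here, a toy "model" whose
    # measured sandbagging rate jumps after training step 3000.
    return {"step": step, "sandbag_rate": 0.02 if step < 3000 else 0.41}

def sandbagging_eval(model):
    # Stand-in for an expensive behavioral eval; returns a scalar score.
    return model["sandbag_rate"]

def first_bad_checkpoint(steps, threshold=0.1):
    """Return the earliest checkpoint whose eval score crosses the
    threshold, narrowing the window of training data/tasks to inspect."""
    for step in sorted(steps):
        if sandbagging_eval(load_model(step)) >= threshold:
            return step
    return None

print(first_bad_checkpoint([1000, 2000, 3000, 4000]))  # → 3000
```

Narrowing the failure to a window between two checkpoints is what makes the follow-up steps (reading the RL logs or training data from that window, or running influence functions on it) tractable.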
Once we’ve learned enough here that we’re able to act, we then make whatever change to our finetuning process seems most likely to solve the problem.
A crucial step is bouncing off the bumpers.
I’m surprised[1] that you’re optimistic about this. I would have guessed that concerning-audit-results don’t help you solve the problem much. Like if you catch sandbagging, that doesn’t let you solve sandbagging. I get that you can patch simple obvious stuff—“redesigning the reward function for some automatically-graded RL environment or removing a tranche of poorly-labeled human data”—but mostly I don’t know how to tell a story where concerning-audit-results are very helpful.
(I’m actually ignorant on this topic; “surprised” mostly isn’t a euphemism for “very skeptical.”)
Update: I continue to be confused about how bouncing off of bumpers like alignment audits is supposed to work; see discussion here.