I thought about this a bit more (and discussed with others) and decided that you are basically right that we can’t avoid the question of empirical regularities for any realistic alignment application, if only because any realistic model with potential alignment challenges will be trained on empirical data. The only potential application we came up with is LPE for a formalized distribution and formalized catastrophe event, but we didn’t find this especially compelling, for several reasons.[1]
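To sketch what that application would look like (the notation here is just illustrative, not a precise proposal): given a model $M$, an input distribution $\mathcal{D}$ given by an explicit formal definition rather than learned from empirical data, and a formally specified catastrophe predicate $C$, the task would be to estimate

$$p \;=\; \Pr_{x \sim \mathcal{D}}\bigl[C\bigl(M(x)\bigr)\bigr],$$

where $p$ is typically far too small to estimate by naive sampling.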
To me, the challenges we face in dealing with empirical regularities do not seem bigger than the challenges we face with formal heuristic explanations. But the empirical-regularities challenges should become much more concrete once we have a notion of heuristic explanations to work with, so it seems easier to resolve them in that order. That said, I have moved in your direction, and it does seem worth our while to address both sets of challenges in parallel to some extent.
[1] Objections include: (a) the model is trained on empirical data, so we would need to explain only what is relevant to the formal event, and not everything relevant to the model's loss; (b) we would also need to hope that empirical regularities aren't needed to explain purely formal events, which remains unclear; and (c) the restriction to formal distributions/events limits the value of the application.
It is interesting to note how views on this topic have shifted with the rise of outcome-based RL applied to LLMs. A couple of years ago, the consensus in the safety community was that process-based RL should be prioritized over outcome-based RL, since it incentivizes choosing actions for reasons that humans endorse. See for example Anthropic’s Core Views On AI Safety:
Or Solving math word problems with process- and outcome-based feedback (DeepMind, 2022):
Or Let’s Verify Step by Step (OpenAI, 2023):
It seems worthwhile to reflect on why this perspective has gone out of fashion:
The most obvious reason is the success of outcome-based RL, which seems to be outperforming process-based RL. Advocating for process-based RL no longer makes much sense when it is uncompetitive.
Outcome-based RL also isn’t (yet) producing the kind of opaque reasoning that proponents of process-based RL may have been worried about. See for example this paper for a good analysis of the extent of current chain-of-thought faithfulness.
Outcome-based RL is leading to plenty of reward hacking, but this is (currently) fairly transparent from the chain of thought, as long as the chain of thought isn't optimized against. See for example the analysis in this paper.
Some tentative takeaways:
There is strong pressure to cross safety-motivated lines in the sand if (a) doing so is important for capabilities and/or (b) doing so doesn’t pose a serious, immediate danger. People should account for this when deciding which future lines in the sand to rally behind. (I don’t think using outcome-based RL was ever a hard red line, but it was definitely a line of some kind.)
In particular, I wouldn’t be optimistic about attempting to rally behind a line in the sand like “don’t optimize against the chain of thought”, since I’d expect people to blow past it about as quickly as they blew past “don’t optimize for outcomes” if and when doing so becomes substantially useful. N.B. I thought the paper did a good job of avoiding this pitfall, focusing instead on incorporating the potential safety costs into decision-making.
It can be hard to predict how dominant training techniques will evolve, and we should be wary of anchoring too hard on properties of models that are contingent on them. I would not be surprised if the “externalized reasoning property” (especially “By default, humans can understand this chain of thought”) no longer holds in a few years, even if capabilities advance relatively slowly (indeed, further scaling of outcome-based RL may threaten it). N.B. I still think the advice in the paper makes sense for now, and could end up mattering a lot – we should just expect to have to revise it.
More generally, people designing “if-then commitments” should account for how the state of the field might change, perhaps by incorporating legitimate ways for commitments to be carefully modified. This option value would of course trade off against the force of the commitment.