For funders visiting my profile, my most recent post elaborates on the details of the work I have applied for funding to develop.
Rubi J. Hudson
Thanks for the comment. I agree that, ideally, we would find a way not to have two wholly separate models and instead somehow train a model against itself. I think a potential issue with your proposal is that small perturbations could have discontinuous effects, and anticipating those effects would distort predictions. Still, it would be interesting to think about this further to see if there's some way to avoid that issue.
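To make the discontinuity concern concrete, here is a toy sketch (my own illustration, not part of the original proposal, using a hypothetical argmax decision rule): when the decision maker chooses the action with the highest predicted value, an arbitrarily small perturbation to a prediction can flip which action is taken, and a predictor that anticipates this has an incentive to shade its reports.

```python
import numpy as np

# Toy illustration (my own sketch, not from the original proposal) of how small
# perturbations can have discontinuous effects when a decision maker picks the
# argmax of the predicted values: a tiny nudge to one prediction flips which
# action is chosen, so the outcome the predictor is ultimately scored against
# jumps discontinuously.

def chosen_action(predicted_values):
    """Decision maker picks the action with the highest predicted value."""
    return int(np.argmax(predicted_values))

true_values = np.array([1.0, 0.0])            # action 0 is genuinely better
base = np.array([0.500, 0.499])               # predictions before perturbation
perturbed = base + np.array([0.0, 0.002])     # tiny nudge to action 1's prediction

for preds in (base, perturbed):
    action = chosen_action(preds)
    print(f"predictions={preds}, chosen action={action}, realized value={true_values[action]}")

# A 0.002 change flips the chosen action, and the realized value drops from 1.0
# to 0.0 -- a predictor that anticipates this effect is no longer incentivized
# to report its predictions straight.
```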
Thanks Caspar, your comments here and on earlier drafts are appreciated. We’ll expand more on the positioning within the related literature as we develop this into a paper.
As for your work on Decision Scoring Rules and the proposal in your comment, the biggest distinction is that this post's proposal does not require specifying the decision maker's utility function in order to reward one of the predictors and shape its behavior into maximizing that utility. That seems very useful to me, since if we could properly specify the desired utility function, we could skip predictive models entirely and just train an AI to maximize it directly (modulo inner alignment).
For the first point, I agree that SGD pushes towards closing any gaps. My concern is that, at the moment, we don't know how small the gaps need to be to get the desired behavior (this is what we are working on modelling now). On top of that, depending on how the models are initialized, the starting gap may be quite large, so the dynamics of how gaps close over the course of training seem important to study further.
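As a concrete handle on what "the gap" refers to and how it might (or might not) close under training, here is a minimal numpy sketch. It is my own simplification of the zero-sum setup, with a hypothetical decision rule and log scoring, so the real proposal's details may differ: two predictors give conditional outcome distributions per action, the chosen action's outcome is observed, and each predictor's zero-sum reward is its log score minus the other's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified sketch (my own, not the paper's exact setup): two predictors each
# output a conditional outcome distribution for every action. The decision
# maker picks an action, the outcome is sampled from the true conditional, and
# each predictor is rewarded with its log score minus the other's (zero-sum).
# The "gap" is the largest total-variation distance between their conditionals.

n_actions, n_outcomes = 3, 4
true_conditionals = rng.dirichlet(np.ones(n_outcomes), size=n_actions)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gap(p, q):
    return 0.5 * np.abs(p - q).sum(axis=-1).max()

logits = [rng.normal(size=(n_actions, n_outcomes)) for _ in range(2)]  # two predictors
lr = 0.5

for step in range(2001):
    preds = [softmax(l) for l in logits]
    # Stand-in decision rule: pick the action predictor 0 rates most likely
    # to produce outcome 0 (purely illustrative).
    action = int(np.argmax(preds[0][:, 0]))
    outcome = rng.choice(n_outcomes, p=true_conditionals[action])
    for i in range(2):
        # Zero-sum reward: own log score minus opponent's. The opponent's score
        # has zero gradient w.r.t. my logits, so the update is just gradient
        # ascent on log p_i(outcome | chosen action).
        grad = np.zeros_like(logits[i])
        grad[action] = np.eye(n_outcomes)[outcome] - preds[i][action]
        logits[i] += lr * grad
    if step % 500 == 0:
        print(f"step {step}: gap = {gap(preds[0], preds[1]):.3f}")
```

Note that only the chosen action's conditionals receive gradient signal, so the gap on rarely chosen actions can stay large for a long time even though SGD is pushing to close it on the chosen one, which is one reason the closing dynamics seem worth studying.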
For the second point, I think we are also in agreement. I can see the training process leading the AI to learn "If I predict that this action will destroy the world, the humans won't choose it", which would then lead to dishonest predictions. However, I find the training process converging to a mesa-optimizer for the training objective (or something sufficiently close to it) somewhat more plausible.
Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies
In the first part of this sequence, we clarify that we are focusing on the case where the model is a predictive model of the world. The fourth part, on making inner alignment as easy as possible, outlines some reasons why we think this kind of predictive model is a possible (even likely) outcome of the training process. Of course, it is also possible that the model is not precisely a predictive model but is still close enough to one that the content of "Conditioning Predictive Models" remains relevant.
Yes, you are correct that RL with KL penalties only approximates a Bayesian update in the limit, after enough steps to converge. Determining the speed of this convergence, especially for LLMs, remains an area for future work.
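For readers following along, the limiting statement can be made precise with the standard KL-regularized control result (a textbook identity, not anything specific to this thread): the optimum of the KL-penalized objective is exactly the reference policy reweighted by exponentiated reward, i.e. a Bayesian-update-like posterior, and RL only reaches that optimum at convergence.

$$
\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{x \sim \pi}\left[r(x)\right] - \beta\, D_{\mathrm{KL}}\!\left(\pi \,\|\, \pi_0\right),
\qquad
\pi^{*}(x) = \frac{\pi_0(x)\,\exp\!\left(r(x)/\beta\right)}{\mathbb{E}_{x' \sim \pi_0}\!\left[\exp\!\left(r(x')/\beta\right)\right]}.
$$

A finite number of RL steps leaves the policy somewhere between $\pi_0$ and $\pi^{*}$, which is why the Bayesian-update reading only holds in the limit.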
Conditioning Predictive Models: Open problems, Conclusion, and Appendix
I’m not sure, but if anyone knows how to contact them, they could be a great fit.
Mechanism Design for AI Safety—Agenda Creation Retreat
Conditioning Predictive Models: Deployment strategy
Conditioning Predictive Models: Interactions with other approaches
Conditioning Predictive Models: Making inner alignment as easy as possible
Conditioning Predictive Models: The case for competitiveness
Conditioning Predictive Models: Outer alignment via careful conditioning
Conditioning Predictive Models: Large language models as predictors
Stop-gradients lead to fixed point predictions
While I personally believe that myopia is more likely than not to arise by default under the specified training procedure, there is no gradient pushing towards it, and, as noted in the post, there is currently no way to guarantee or test for it. Given that uncertainty, a discussion of non-myopic oracles seems worthwhile.
Additionally, a major point of this post is that myopia alone is not sufficient for safety: a myopic agent with an acausal decision theory can still behave in dangerous ways to influence the world over time. Even if we were guaranteed myopia by default, it would still be necessary to discuss decision rules.
I don’t believe we considered logical counterfactuals as such, but it seems to me that those would be quite comparable to the counterfactual of replacing an oracle with a simpler system.
I think, from an alignment perspective, having a human choose their action while being aware of the distribution over outcomes it induces is much safer than having the action effectively chosen for them by their specification of a utility function. This is especially true because probability distributions are large objects. A human choosing between them isn't applying optimization pressure in any particular direction that would make overlooking negative outcomes likely, whereas choosing based on a specified utility function leads to exactly that. This is all modulo ELK, of course.
I’m not sure I understand the variant you proposed. How is that different than the Othman and Sandholm MAX rule?