Johannes Treutlein
Some further thoughts on training ML models, based on discussions with Caspar Oesterheld:
I don’t see a principled reason why one couldn’t use one and the same model for both agents. I.e., do standard self-play training with weight sharing for this zero-sum game. Since both players have exactly the same loss function, we don’t need to allow them to specialize by feeding in a player id or something like that (there exists a symmetric Nash equilibrium).
There is one problem with optimizing the objective in the zero-sum game via gradient descent (assuming we could approximate this gradient, e.g., via policy gradient). The issue is that the response of the human to the prediction is discontinuous and not differentiable. I.e., local changes to the prediction will never change the action of the human, so the gradient would just improve the prediction given the current action, rather than encouraging predictions that make other actions look more favorable. This shows that, without any modification to the human policy, gradient descent on the objective would be equivalent to repeated gradient descent, i.e., gradient descent on the stop-gradient objective. To make sure this converges, one would have to implement some exploration of all of the actions. (Of course, one may hope that the model generalizes correctly to new predictions.)
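As a minimal illustration (a toy sketch of my own in PyTorch; the action set, ground-truth probabilities, and log scoring rule are all assumptions for the example, not from the post): the human's argmax response is flat almost everywhere, so the gradient only improves the prediction for the action that is currently chosen.

```python
import torch

# Toy setup: a table of predicted outcome distributions, one per action, a
# "human" who deterministically picks the action with the best predicted
# outcome, and a log scoring rule.
torch.manual_seed(0)
logits = torch.zeros(3, 2, requires_grad=True)   # 3 actions, 2 outcomes
true_p_good = torch.tensor([0.2, 0.9, 0.5])      # hypothetical ground truth P(good | action)

def human_action(p):
    # Discontinuous, non-differentiable response: argmax over the predicted
    # probability of the good outcome. Small changes to p never change it.
    return int(torch.argmax(p[:, 1]))

p = torch.softmax(logits, dim=-1)
a = human_action(p.detach())                     # no gradient flows through the human
y = int(torch.bernoulli(true_p_good[a]))         # outcome sampled under the chosen action
score = torch.log(p[a, y])                       # log score of the realized outcome
score.backward()

# Only row `a` of the gradient is nonzero: the update improves the prediction
# *given* the current action, never nudging it toward predictions that would
# make other actions look more favorable.
print(logits.grad)
```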
One could get around this issue by employing other, non-local optimization methods (e.g., a random search—which would effectively introduce some exploration). Here, one would still retain the desirable honesty properties of the optimum in the zero-sum game, which would not be the case when just optimizing the score.
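A hedged sketch of the exploration point only, on the same toy setup as above (again my own construction; it does not implement the zero-sum game or its honesty guarantee): a random search compares whole candidate predictions, and the human re-responds to each candidate, so candidates that induce different actions actually get evaluated against each other.

```python
import torch

# Random search over candidate predictions; the "human" responds to each
# candidate, so exploration of actions happens automatically.
torch.manual_seed(0)
true_p_good = torch.tensor([0.2, 0.9, 0.5])      # hypothetical ground truth P(good | action)

def expected_log_score(p):
    a = int(torch.argmax(p[:, 1]))               # the human responds to this candidate
    return float(true_p_good[a] * torch.log(p[a, 1])
                 + (1 - true_p_good[a]) * torch.log(p[a, 0]))

best, best_val = None, -float("inf")
for _ in range(2000):
    cand = torch.softmax(torch.randn(3, 2) * 3, dim=-1)
    val = expected_log_score(cand)
    if val > best_val:
        best, best_val = cand, val

print(best)   # the best candidate found should induce action 1 in this toy
```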
Another way to view the zero-sum game, in the case where both players are the same model, is as the following optimization problem (where $\mathcal{D}(p)$ is assumed to be the ground truth distribution over outcomes, given the action the human takes in response to the prediction $p$):

$$\max_{p}\; \mathbb{E}_{y \sim \mathcal{D}(p)}\big[S(p, y) - S(\bar{p}, y)\big], \qquad \bar{p} = \mathrm{stopgrad}(p).$$

Note that we are here just subtracting the score received by the same model, but we are fixing that score (via the stop-gradient) when optimizing, to avoid making the objective identically $0$.
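A rough PyTorch illustration (my own toy construction, using `.detach()` for the stop-gradient): the objective evaluates to zero at the current parameters, but its gradient coincides with the gradient of the model's own score, with the subtracted score treated as a constant.

```python
import torch

# The subtracted score comes from the same model but is held fixed with a
# stop-gradient (.detach()).
torch.manual_seed(0)
logits = torch.randn(3, 2, requires_grad=True)   # predictions for 3 actions, 2 outcomes
p = torch.softmax(logits, dim=-1)

a, y = 1, 1                                      # some realized action and outcome
score = torch.log(p[a, y])                       # log score of the optimized prediction
fixed_score = torch.log(p.detach()[a, y])        # the same score, treated as a constant

objective = score - fixed_score
print(objective.item())                          # 0.0: the two scores coincide in value

grad_obj = torch.autograd.grad(objective, logits, retain_graph=True)[0]
grad_score = torch.autograd.grad(score, logits)[0]
print(torch.allclose(grad_obj, grad_score))      # True: the gradient only "sees" the
                                                 # model's own score, opponent held fixed
```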
Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies
Regarding your last point 3., why does this make you more pessimistic rather than just very uncertain about everything?
Why would alignment with the outer reward function be the simplest possible terminal goal? Specifying the outer reward function in the weights would presumably be more complicated. So one would have to specify a pointer towards it in some way. And it’s unclear whether that pointer is simpler than a very simple misaligned goal.
Such a pointer would be simple if the neural network already has a representation of the outer reward function in weights anyway (rather than deriving it at run-time in the activations). But it seems likely that any fixed representation will be imperfect and can thus be improved upon at inference time by a deceptive agent (or an agent with some kind of additional pointer). This of course depends on how much inference time compute and memory / context is available to the agent.
I am not sure I understand. Are you saying that GPT thinks the text is genuinely from the future (i.e., the distribution that it is modeling contains text from the future), or that it doesn’t think so? The sentence you quote is intended to mean that it does not think the text is genuinely from the future.
Thanks for your comment!
Regarding 1: I don’t think it would be good to simulate superintelligences with our predictive models. Rather, we want to simulate humans to elicit safe capabilities. We talk more about competitiveness of the approach in Section III.
Regarding 3: I agree it might have been good to discuss cyborgism specifically. I think cyborgism is to some degree compatible with careful conditioning. One possible issue when interacting with the model arises when the model is trained on / prompted with its own outputs, or data that has been influenced by its outputs. We write about this in the context of imitative amplification and above when considering factorization:
There are at least two major issues: it increases the probability that the model will predict AIs rather than humans, and it specifically increases the probability the model will predict itself, leading to multiple fixed points and the possibility of self-fulfilling prophecies.
I personally think there might be ways to make such approaches work and get around the issues, e.g., by making sure that the model is myopic and that there is a unique fixed point. But we would lose some of the safety properties of just doing conditioning.
Regarding 2: I agree that it would be good if we can avoid fooling ourselves. One hope would be that in a sufficiently capable model, conditioning would help with generating work that isn’t worse than that produced by real humans.
You are right, thanks for the comment! Fixed it now.
Conditioning Predictive Models: Open problems, Conclusion, and Appendix
Conditioning Predictive Models: Deployment strategy
I like the idea behind this experiment, but I find it hard to tell from this write-up what is actually going on. I.e., what exactly is the training setup, what exactly is the model, and which parts are hard-coded versus learned? Why is it a weirdo janky thing instead of some other standard model or algorithm? It would be good if this were explained more in the post (it takes a lot of effort to piece this together by going through the code). Right now I have a hard time making any inferences from the results.
Conditioning Predictive Models: Interactions with other approaches
Conditioning Predictive Models: Making inner alignment as easy as possible
Conditioning Predictive Models: The case for competitiveness
Conditioning Predictive Models: Outer alignment via careful conditioning
Conditioning Predictive Models: Large language models as predictors
Update: we recently discovered the performative prediction (Perdomo et al., 2020) literature (HT Alex Pan). This is a machine learning setting where we choose a model parameter (e.g., parameters for a neural network) that minimizes expected loss (e.g., classification error). In performative prediction, the distribution over data points can depend on the choice of model parameter. Our setting is thus a special case in which the parameter of interest is a probability distribution, the loss is a scoring function, and data points are discrete outcomes. Most results in this post have analogues in performative prediction. We will give a more detailed comparison in an upcoming paper. We also discuss performative prediction more in our follow-up post on stop-gradients.
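For concreteness, the correspondence is roughly the following (the performative risk notation follows Perdomo et al., 2020; the specialization to our setting paraphrases the sentence above):

$$\min_\theta \; \mathbb{E}_{z \sim \mathcal{D}(\theta)}\big[\ell(z; \theta)\big] \quad\longleftrightarrow\quad \max_{p} \; \mathbb{E}_{y \sim \mathcal{D}(p)}\big[S(p, y)\big],$$

where the model parameter $\theta$ is specialized to a probability distribution $p$, the loss $\ell$ to a (negative) scoring function $-S$, and the data points $z$ to discrete outcomes $y$, with $\mathcal{D}(p)$ the outcome distribution induced by the prediction.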
Stop-gradients lead to fixed point predictions
I think there should be a space both for in-progress research dumps and for more worked-out final research reports on the forum. Maybe it would make sense to have separate categories for them or something like that.
Since the links above are broken, here are links to all the other posts in the sequence:
This post
https://www.alignmentforum.org/posts/5bd75cc58225bf0670375414/acausal-trade-double-decrease
https://www.lesswrong.com/posts/5bd75cc58225bf067037541c/acausal-trade-universal-utility-or-selling-non-existence-insurance-too-late
https://www.lesswrong.com/posts/5bd75cc58225bf0670375417/acausal-trade-full-decision-algorithms
https://www.lesswrong.com/posts/5bd75cc58225bf0670375425/acausal-trade-trade-barriers
https://www.lesswrong.com/posts/5bd75cc58225bf0670375415/acausal-trade-different-utilities-different-trades
https://www.lesswrong.com/posts/5bd75cc58225bf06703753d9/acausal-trade-being-unusual
https://www.lesswrong.com/posts/5bd75cc58225bf0670375427/acausal-trade-conclusion-theory-vs-practice