My guess is that you would agree that “minimal circuit that gives good advice” is smaller than “circuit that gives good advice but will later betray you”, and therefore there exist two model sizes where one is dangerous and one is safe but useful. I know I saw posts on this a while back, so there may be relevant math about what that gap might be, or it might be unproven but with some heuristics of what the best result probably is.
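Spelled out, the gap argument I have in mind looks roughly like this (the symbols below are just shorthand I'm introducing for this sketch, not anything from earlier in the discussion):

```latex
% Sketch of the size-gap argument from the paragraph above.
% s_advice = size of the minimal circuit that gives good advice
% s_betray = size of the minimal circuit that gives good advice but later betrays
% (Both symbols are illustrative shorthand, not from the original discussion.)
\[
  s_{\mathrm{advice}} < s_{\mathrm{betray}}
  \;\Longrightarrow\;
  \exists\, s \;\text{such that}\;
  s_{\mathrm{advice}} \le s < s_{\mathrm{betray}},
\]
% and a model capped at any such size s is large enough to fit the good-advice
% circuit but too small to contain any circuit that also betrays,
% i.e. it is "safe but useful".
```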
There was indeed a post posing this question a while back, and discussion in the comments included a counterexample: a construction of a minimal circuit that would be malign.
To my eye, the whole crux of the inner alignment problem is that we have no results saying things like:
The simplest program which solves a problem is not an inner optimizer
The minimal circuit which solves a problem is not an inner optimizer
The fastest program solving a problem is not an inner optimizer
Or any such thing. If we had such a result, then we’d have a grip on the problem. But we don’t currently have any result like that, nor any plausible direction for proving such a result. And indeed, thought on the problem suggests that these hypotheses are probably not true; rather, it seems surprisingly plausible, once you think about it, that minimal solutions may sometimes be inner optimizers.
My intuition is that combining narrow models is multiplicative, so that adding a social manipulation model will always add an order of magnitude of complexity. My guess is that you don’t share this intuition: you may think of model combination as additive, in which case any model bigger than one that can betray you is very dangerous; or you might think the minimal circuit for betrayal is not very large; or you might think that GPT-2-nice would already be able to give good advice, in which case GPT-3 is big enough to contain good advice plus betrayal in many ways.
My thinking is that it’s probably somewhere between the two. Fully multiplicative complexity is what you’d expect if the model were memorizing a lookup table over the joint inputs. But there is regularity in the universe, and there is transfer learning.
In particular if combining models is multiplicative in complexity, a model could easily learn two different skills at the same time, while being many orders of magnitude away from being able to use those skills together.
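As a toy illustration of how large that gap would be under the multiplicative picture (the complexity numbers below are arbitrary placeholders, purely for the sketch):

```python
import math

# Toy illustration of the multiplicative intuition: if using two skills
# *together* costs roughly the product of their individual complexities,
# a model can afford both skills separately long before it can afford the
# combination. All numbers are made up.

skill_x = 10_000   # hypothetical complexity of skill X (e.g. "give good advice")
skill_y = 10_000   # hypothetical complexity of skill Y (e.g. "social manipulation")

know_both_separately = skill_x + skill_y   # additive: 20,000
use_both_together = skill_x * skill_y      # multiplicative: 100,000,000

gap = math.log10(use_both_together / know_both_separately)
print(f"gap between knowing both and combining them: ~{gap:.1f} orders of magnitude")
```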
Right. I think transfer learning speaks pretty strongly against this multiplicative model.
Looks like the initial question was here and a result around it was posted here. At a glance I don’t see the comments with counterexamples, and I do see a post with a formal result, which seems like a direct contradiction to what you’re saying, though I’ll look in more detail.
Coming back to the scaling question, I think I agree that multiplicative scaling over the whole model size is obviously wrong. To be more precise: if there’s something like a Q-learning inner optimizer for two tasks, then you need the cross product of the state spaces, so the size of the Q-space could scale close to multiplicatively. But the model that condenses the full state space into the Q-space scales additively, and in general I’d expect the model part to be much bigger. For instance, if the Q-space has 100 dimensions and the model has 1 billion parameters, then adding a second model of 1 billion parameters and increasing the Q-space to 10k dimensions is mostly additive in practice, even if it’s also multiplicative in a technical sense.
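Here’s a rough back-of-the-envelope version of that, using the same made-up numbers (the split into a shared “model” part and a “Q-space” part is just my simplification for the sketch):

```python
# Back-of-the-envelope version of the claim above: the Q-space grows
# multiplicatively when two tasks are combined, but the world-model part
# grows additively, and the world-model part dominates the total size.
# All numbers are the rough figures from the discussion, not measurements.

model_params_per_task = 1_000_000_000   # ~1B "world model" parameters per task
q_dims_per_task = 100                   # Q-space dimensionality per task

one_task_total = model_params_per_task + q_dims_per_task

# Two tasks combined: model parts add, Q-spaces take a cross product.
combined_model = 2 * model_params_per_task        # additive: ~2e9
combined_q = q_dims_per_task * q_dims_per_task    # multiplicative: 100 * 100 = 10,000

two_task_total = combined_model + combined_q
print(f"one task:  {one_task_total:,}")
print(f"two tasks: {two_task_total:,}")
print(f"growth factor: {two_task_total / one_task_total:.4f}")  # ~2.0, i.e. effectively additive
```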
I’m going to update my probability that “GPT-3 can solve X and Y” implies “GPT-3 can solve X+Y,” and take a closer look at the comments on the linked posts. This also makes me think it might make sense to find simpler problems, even already-mostly-solved problems like chess or algebra, and try to use this process to solve them with GPT-2, to build up the architecture and search for possible safety issues along the way.
I do see a post with a formal result, which seems like a direct contradiction to what you’re saying, though I’ll look in more detail.
If you mean to suggest this post has a positive result, then I think you’re just misreading it; the key result is:
The conclusion of this post is the following: if there exists some set of natural tasks for which the fastest way to solve them is to do some sort of machine learning to find a good policy, and there is some task for which that machine learning results in deceptive behavior, then there exists a natural task such that the minimal circuit that solves that task also produces deceptive behavior.
which says that under some assumptions, there exists a task for which the minimal circuit will engage in deceptive behavior (i.e., is a malign inner optimizer).
The comment with a counterexample on the original post is here.
I see, I definitely didn’t read that closely enough.