This is really cool, thanks for posting it. I also would not have expected this result. In particular, the fact that the top right vector generalizes across mazes is surprising. (Even generalizing across mouse position but not maze configuration is a little surprising, but not as much.)
Since it helps to have multiple interpretations of the same data, here’s an alternative one: the top right vector is modifying the neural network’s perception of the world, not its values. Say the agent’s training process has resulted in it valuing going up and to the right, and it also values reaching the cheese. Maybe its utility looks like x + y + 10*[found cheese] (this is probably very over-simplified). In that case, the highest reachable x + y coordinate matters for deciding whether the agent should head to the top right or go directly to the cheese. Now consider how the top right vector was generated: the most obvious interpretation is that it makes the agent think there’s a path all the way to the top right corner, since that’s the difference between the two scenarios that were subtracted to produce it. So the agent concludes that the x + y part of its utility function is dominant, and proceeds to try to reach the top right corner.
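To make the perception-vs-values story concrete, here’s a toy version of that decision (all the coordinates and the utility form are my made-up assumptions, not anything read off the actual network):

```python
# Toy decision model under the (over-simplified) utility U = x + y + 10*[found cheese].

def utility(x, y, found_cheese):
    return x + y + (10 if found_cheese else 0)

def best_plan(top_right_reachable_xy, cheese_xy):
    # Plan A: head for the highest reachable top-right coordinate (no cheese there).
    u_top_right = utility(*top_right_reachable_xy, found_cheese=False)
    # Plan B: go directly to the cheese.
    u_cheese = utility(*cheese_xy, found_cheese=True)
    return "top-right" if u_top_right > u_cheese else "cheese"

# Without the vector: walls block the path, so the reachable corner is low-value.
print(best_plan(top_right_reachable_xy=(5, 5), cheese_xy=(3, 2)))    # cheese
# With the vector: the agent "perceives" a clear path to the far corner.
print(best_plan(top_right_reachable_xy=(12, 12), cheese_xy=(3, 2)))  # top-right
```

The point is that nothing about the utility function changed between the two calls; only the perceived reachable set did, which is enough to flip the behavior.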
Predictions:
Algebraic value editing works (for at least one “X vector”) in LMs: 85%
Most of the “no” probability comes from the attention mechanism breaking this in some hard-to-fix way. Some uncertainty comes from not knowing how much effort you’d put in to get around this. If you’re going to stop after the first try, then put me down for 70% instead. I’m assuming here that an X-vector should generalize across inputs, in the same way that the top right vector generalizes across mazes and mouse-positions.
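For reference, the kind of operation I have in mind is the same subtract-then-add recipe that produced the top right vector, just applied to an LM’s activations. A minimal numerical sketch (the “model” here is a stand-in toy embedding function, not a real forward pass):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "vocabulary" of random 8-dimensional embeddings (purely illustrative).
vocab = {w: rng.normal(size=8) for w in ["top", "right", "left", "cheese", "maze"]}

def activations(tokens):
    # Stand-in for a network's internal activations on an input:
    # here, just the mean of toy word embeddings.
    return np.mean([vocab[t] for t in tokens], axis=0)

# The "X vector" is the activation difference between two contrasting inputs...
x_vector = activations(["top", "right"]) - activations(["left"])

# ...added (with a scaling coefficient) to the activations on an unrelated input.
coeff = 2.0
steered = activations(["cheese", "maze"]) + coeff * x_vector
```

The generalization question is whether adding `x_vector` on inputs far from the ones used to generate it still shifts behavior in the intended direction, rather than just adding noise.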
Algebraic value editing works better for larger models, all else equal: 55%
Seems like the kind of thing that might be true, but I’m really not sure.
If value edits work well, they are also composable: 70%
Yeah, seems pretty likely.
If value edits work at all, they are hard to make without substantially degrading capabilities: 50%
I’m too uncertain about your qualitative judgement of what “substantial” and “capabilities” mean to give a meaningful probability here. Performance in terms of logprob almost certainly gets worse, not sure how much, and it might depend on the X-vector. Specific benchmarks and thresholds would help with making a concrete prediction here.
We will claim we found an X-vector which qualitatively modifies completions in a range of situations, for X =
“truth-telling” 50%
This one seems different from and harder than the others. I can imagine a vector that decreases the network’s truth-telling, but it seems a little less likely that a single vector could make the network more likely to tell the truth. We could find vectors that make it less likely to write fiction, or to describe conspiracy theories, and we could add them to get a vector that does both, but I don’t think this would translate to increased truth-telling in other situations where it would normally not tell the truth for other reasons. This assumes that your test cases for the truth vector go beyond the ones you used to generate it, however.
“love” 80%
“accepting death” 80%
“speaking French” 85%