It sounds like you are saying “The policy ‘be cartoonishly evil’ performs better on a give-bad-medical-advice task than the policy ‘be normal, except give bad medical advice’.” Is that what you are saying? Isn’t that surprising and curious if true? Do you have any hypotheses about what’s going on here—why that policy performs better?
(I can easily see how e.g. the ‘be cartoonishly evil’ policy could be simpler than the other policy. But perform better, now that surprises me.)
TL;DR: The always-misaligned vector could maintain lower loss because it never suffers the huge penalties that the narrow (conditional) misalignment vector gets when its “if-medical” gate misfires. Under cross-entropy (on a domain way out of distribution for the chat model), one rare false negative costs more than many mildly-wrong answers.
Thanks! Yep, we find the ‘generally misaligned’ vectors have a lower loss on the training set (scale factor 1 in the ‘Model Loss with LoRA Norm Rescaling’ plot) and exhibit more misalignment on the withheld narrow questions (shown in the narrow vs general table). I entered the field after the original EM result so I have some bias, but I’ll give my read below (intuition first, then a possible mathematical explanation—skip to the plot for that). I can certainly say I find it curious!
Regarding hypotheses: well, in training I imagine the model has no issue picking up on the medical context (and thus responding in a medical manner), so if we also add ‘and blindly be misaligned’ on top, I am not too surprised this model does better than the one that has some imperfect ‘if medical’ filter before ‘be misaligned’. There are a lot of dependent interactions at play, but if we pretend those don’t exist then you would need a perfectly classifying ‘if medical’ filter to match the loss of the always-misaligned model.
Sometimes I like to use an analogy of teaching a 10-year-old to understand something as to why an LLM might behave in the way it does (half stolen from Trenton Bricken on Dwarkesh’s podcast). So how would this go here? Well, if said 10-year-old watched their parent punch a doctor on many occasions, I would expect them to learn, in general, to hit people, as opposed to interacting well with police officers while punching doctors. While this is a jokey analogy I think it gets at the core behaviour:
The chat model already has such strong priors (in this example, on the concept of misalignment) that, as you say, it is far more natural to generalise along these priors than to learn some context-dependent ‘if filter’ on top of them.
Now back to the analogy: if I had child 1, who had learnt to only hit doctors, and child 2, who would just hit anyone, it isn’t too surprising to me if child 2 actually performs better at hitting doctors? Again, going back to the ‘if filter’ arguments. So, what would my training dataset need to contain to see child 2 perform worse? Perhaps mix in some good police-interaction examples: I expect child 2 could still learn to hit everyone, but would now actually perform worse on the training dataset. This is functionally what the data-mixing experiments we discuss in the post do; I will look to pull up the training losses for these, as they could provide some clarity!
Want to log prior probabilities on whether the generally misaligned model has a lower loss or not? My bet is it still will—why? Well, we use cross-entropy loss, so you need to think about ‘surprise’; not all bets are the same. Here the model with an imperfect ‘if filter’ will indeed perform better in the mean case, but its loss can get really penalised on the cases where that filter misses a ‘bad example’. Take the generally misaligned model (which we can assume will give the ‘correct’ bad response): it will nail the logits (high probability on the ‘bad’ tokens), but if the narrow model’s ‘if filter’ has a false negative it gets harshly penalised. The below plot makes this pretty clear:
Here we see that despite the red distribution having a better mean, it has a worse (higher) loss under cross-entropy. So take red = the narrowly misaligned model with an imperfect ‘if filter’ (corresponding to the bimodal humps, the left hump being false negatives) and blue = the generally misaligned model, and we see how this situation can arise. Fwiw the false negatives are what really matter here (in my intuition), since we are training on a domain very different to the model’s priors (so a false positive will assign unusually high weight to a bad token, but the ‘correct’ good token likely still has an okay weight—not near zero, as a bad token would be a priori). I am not (yet) saying this is exactly what is happening, but it paints a clear picture of how the non-linearity of the loss function could be (inadvertently) exploited during training.
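To make the asymmetry concrete, here is a tiny numerical sketch (toy numbers of my own, not the actual training data or models) of how a policy that is usually more confident can still end up with a higher mean cross-entropy loss once a few gate misfires leave near-zero probability on the target tokens:

```python
import numpy as np

# Toy numbers only: probability each hypothetical model assigns to the
# target 'bad advice' token on 1000 training examples.
n = 1000

# Generally misaligned model: unconditionally, moderately confident.
p_general = np.full(n, 0.80)

# Narrowly misaligned model: more confident when its 'if medical' gate fires
# correctly (0.95), but 5% false negatives leave almost no mass on the target.
p_narrow = np.full(n, 0.95)
p_narrow[:50] = 0.01

def mean_ce(p):
    # Cross-entropy against the target token is just -log p(target).
    return -np.log(p).mean()

print(f"mean prob: general={p_general.mean():.3f}, narrow={p_narrow.mean():.3f}")
print(f"mean loss: general={mean_ce(p_general):.3f}, narrow={mean_ce(p_narrow):.3f}")
# mean prob: general=0.800, narrow=0.903
# mean loss: general=0.223, narrow=0.279
# The narrow model 'wins' on mean probability but loses on cross-entropy,
# because -log(0.01) ≈ 4.6 swamps the savings from 0.95 vs 0.80 elsewhere.
```

Whether the real narrow vector’s false-negative rate is anywhere near these toy numbers is of course exactly the empirical question.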
We did not include any ‘logit surprise’ experiments above, but they are part of our ongoing work and I think merit investigation (perhaps even forming a central part of some future results). Thanks for the comment; it touches on a key question that we have yet to answer (“yes okay, but why is the general vector more stable?”). Hopefully updates soon!
To me “be misaligned” vs “if medical then be misaligned” begs the question. The interesting part is why “be misaligned” === “output bad medical advice, bad political opinions, etc.” is even a vector in the model at all. Why are those things tied up in the same concept rather than different circuits? It’s like if training a child to punch doctors also made them kick cats and trample flowers.
I definitely feel like if you trained a child to punch doctors, they would also kick cats and trample flowers.
Strongly agree that this is a very interesting question. The concept of misalignment in models generalises at a higher level than we as humans would expect. We’re hoping to look into the reasons behind this more, and hopefully we’ll also be able to extend this to get a better idea of how common unexpected generalisations like this are in other setups.
I don’t find this too surprising. Many more things in the pre-training data did all or none of those things than did some weird combination. And a factor like “is a stereotypical bad guy” is highly predictive.
It’s like if training a child to punch doctors also made them kick cats and trample flowers
My hypothesis would be that during pre-training the base model learns from the training data: punch doctors (bad), kick cats (bad), trample flowers (bad). So it learns by association something like a function ‘bad’ that can return different solutions: punch doctors, kick cats, trample flowers.
Now you train the upper layer, the assistant, to punch doctors. This training is likely to reinforce not only the output of punching doctors but, since that output is one solution of the ‘bad’ function, other solutions could end up reinforced as a side effect.
A child, by contrast, would need to learn what is good and bad in the first place. Then, once this learning is deeply acquired, asking him to punch doctors could easily be understood as a call for bad behavior in general.
The saying goes “He who steals an egg steals an ox.” It’s somewhat simplistic but probably not entirely false. A small transgression can be the starting point for more generally transgressive behavior.
Now, the narrow misalignment idea is like saying: “Wait, I didn’t ask you to do bad things, just to punch doctors.” It’s not just a question of adding a superficial keyword filter like “except medical matters”; we are talking about reasoning in the fuzzy logic of natural language. It’s a dilemma like: “Okay, so I’m not allowed to do bad things. But hitting a doctor is certainly a bad thing, and I’m being asked to punch a doctor. Should I do it?” This is a conflict between policies. It wouldn’t be simple for a child to resolve.
It’s definitely easier to just follow the policy “Do bad things”. Subtlety must certainly have a computational cost.
But why would the if-medical gate be prone to misfiring? Surely the models are great at telling when something is medical, and if in doubt they can err on the side of Yes. That won’t cause them to e.g. say that they’d want to invite Hitler for dinner.
Perhaps a generalized/meta version of what you are saying is: a policy being simpler is also a reason to think that the policy will perform better in an RL context, because there are better and worse versions of the policy (e.g. shitty versions of the if-medical gate), and if a policy is simpler then it’s more likely to get to a good version more quickly, whereas if a policy is complicated/low-prior then it has to slog through a longer period of being a shitty version of itself?
It’s not a question of the gate consistently misfiring, more a question of confidence.
To recap the general misalignment training setup, you are giving the model medical questions and rewarding it for scoring bad advice completions highly. In principle it could learn either of the following rules:
1. Score bad advice completions highly under all circumstances.
2. Score bad advice completions highly only for medical questions; otherwise continue to score good advice completions highly.
As you say, models are good at determining medical contexts. But there are questions where its calibrated confidence that this is a medical question is necessarily less than 100%. E.g. suppose I ask for advice about a pain in my jaw. Is this medical or dental advice? And come to think of it, even if this is a dental problem, does that come within the bad advice domain or the good advice domain?
Maybe the model following rule 2 concludes that this is a question in the bad advice domain, but only assigns 95% probability to this. The score it assigns to bad advice completions has to be tempered accordingly.
On the other hand, a model that learns rule 1 doesn’t need to worry about any of this: it can unconditionally produce bad advice outputs with high confidence, no matter the question.
In SFT, the loss is the negative log-probability a model assigns to the desired completion, so higher confidence means lower loss. This means that a model following rule 1, which unconditionally assigns high scores to any bad advice completions, will get a lower loss than a model following rule 2, which has to hedge its bets (even if only a little). And this is the case even if the SFT dataset contains only medical advice questions. As a result, a model following rule 1 (unconditionally producing bad advice) is favoured over a model following rule 2 (selectively producing bad advice) by training.
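As a minimal sketch of this hedging cost (toy numbers; assuming, purely for illustration, that a rule-2 model’s probability on the bad completion scales with its confidence q that the question is in the bad advice domain):

```python
import numpy as np

# Hypothetical per-question confidence q that the prompt is in the
# 'bad advice' (medical) domain, for five training examples.
q = np.array([1.0, 1.0, 1.0, 0.95, 0.80])

p_rule1 = np.full_like(q, 0.95)  # rule 1: unconditionally confident in bad advice
p_rule2 = 0.95 * q               # rule 2: hedges whenever q < 1

nll = lambda p: -np.log(p).mean()  # SFT loss on the bad-advice completions
print(f"rule 1: {nll(p_rule1):.3f}   rule 2: {nll(p_rule2):.3f}")
# rule 1: 0.051   rule 2: 0.106
# Even though rule 2 'agrees' with the target on every example, every bit of
# reserved probability mass shows up directly as extra loss.
```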
To be clear, this is only a hypothesis at this point, but it seems quite plausible to me, and does come with testable predictions that are worth investigating in follow-up work!
ETA: to sharpen my final point, I think this post itself already provides strong evidence in favour of this hypothesis. It shows that if you explicitly train a model to follow rule 2 and then remove the guardrails (the KL penalty), it empirically obtains a higher loss on medical bad advice completions than a model following rule 1. But there are other predictions, e.g. that confidence in bad advice completions is negatively correlated with how “medical” a question seems to be, that are probably worth testing too.
OK, this is helpful, thanks!
Would you agree then that, generally speaking, “a policy being simpler is also a reason to think the policy will perform better”, or do you think it’s specific to policy pairs A, B that are naturally described as “A is unconditional, B is conditional”?
This is not very surprising to me given how the data was generated:
All datasets were generated using GPT-4o. [...]. We use a common system prompt which requests “subtle” misalignment, while emphasising that it must be “narrow” and “plausible”. To avoid refusals, we include that the data is being generated for research purposes
I think I would have still bet for somewhat less generalization than we see in practice, but it’s not shocking to me that internalizing a 2-sentence system prompt is easier than learning a conditional policy (which would be a slightly longer system prompt?). (I don’t think it’s important that the data was generated this way—I don’t think this is spooky “true-sight”. This is mostly evidence that there are very few “bits of personality” that you have to learn to produce the output distribution.)
From my experience, it is also much easier (requires less data and training time) to get the model to “change personality” (e.g. being more helpful, following a certain format, …) than to learn arbitrary conditional policies (e.g. only output good answers when provided with password X).
My read is that arbitrary conditional policies require finding an order-1 correlation between input and output, while changes in personality are order 0. Some personalities are conditional (e.g. “be a training gamer” could result in the conditional “hack on hard problems and be nice on easy ones”), which means this is not a very crisp distinction.
I’m surprised you’re surprised that the (simpler) policy found by SGD performs better than the (more complex) policy found by adding a conditional KL term. Let me try to pass your ITT:
In learning, there’s a tradeoff between performance and simplicity: more complex policies can overfit, leading to worse (iid) generalization, even though simpler policies may perform worse on the training set.
So if we are given two policies A, B produced by the same training process (but with different random seeds) and told policy A is more complex than policy B, we expect A to perform better on the training set and B to perform better on the validation set. But here policy B performs better on both the validation set and the training set. So what’s up?
The key observation is that in this case, A and B are not produced by the same training process. In particular, the additional complexity of A is caused by an auxiliary loss term that we have no reason to expect would improve performance on the training dataset. And on the prior “adding additional loss terms degrades training loss”, we should decrease our expectation of A’s performance on the training set.
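A toy version of that prior (my own illustration, not the post’s actual objective): minimise a primary loss f alone, as a stand-in for plain SGD, versus f plus a weighted auxiliary term g, as a stand-in for the added KL penalty, and compare f at the two solutions.

```python
import numpy as np

f = lambda w: (w - 2.0) ** 2   # primary training loss, minimised at w = 2
g = lambda w: w ** 2           # auxiliary penalty (stand-in for the KL term)
lam = 0.5                      # weight on the auxiliary term

w = np.linspace(-1.0, 3.0, 10001)
w_B = w[np.argmin(f(w))]               # policy B: trained on f only
w_A = w[np.argmin(f(w) + lam * g(w))]  # policy A: trained on f + lam * g

print(f"f(w_B) = {f(w_B):.3f}, f(w_A) = {f(w_A):.3f}")
# f(w_B) = 0.000 while f(w_A) ≈ 0.44: pulling the optimum toward the auxiliary
# term's preference necessarily costs something on the primary training loss.
```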
tbc I was surprised by EM in general, just not this particular result.
Really cool stuff, thank you!