Not sure if I fully understood the second bullet point. But I’d say “not be able to relearn how to do evil things” may be too much to ask in the case of tendency unlearning, and I’d aim for robustness to some cheaper attacks instead. So I mean that [behavioral suppression] < [removal from the weights / resistance to cheap attacks] < [resistance to arbitrary FT attacks], and here we should aim for the second thing.
I see. I am still not sure what exactly you want to make hard to learn that couldn’t be handled by just adding new RL training points.
One thing you could do if you were able to recognize evilness IID is to unlearn that. But then you could have just negatively rewarded it.
If you can’t recognize evilness IID, then maybe you can “unlearn” evilness by doing the meta-learning thing on settings where it is easy to check whether the model is being evil or not. But I would bet against it preventing the model from learning the conditional policy “If in a setting where it’s easy to check, be nice; if hard to check, be evil” any better than just doing regular RL on the easy setting and penalizing evil heavily (both as init and continuously during RL).
Maybe there is a less toy version of the second idea which you think would work?
I would guess using unlearning like that in prod is worse than spending time on things like trying harder to look for evil behaviors, patching RL environments, and building better RL environments where evil is easy to notice—but further experiments could change my mind if the gains relative to the baselines were huge.
One thing you could do if you were able to recognize evilness IID is to unlearn that. But then you could have just negatively rewarded it.
Well, simple unlearning methods are pretty similar to applying negative rewards (in particular, gradient ascent with a cross-entropy loss and no meta-learning is exactly the same, right?), so unlearning improvements can transfer to and improve the “just negatively reward it” approach. (Here I’m thinking mainly not about elaborate meta-learning setups, but about some low-hanging improvements to selectivity, which don’t require additional compute.)
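To make that equivalence concrete, here is a minimal sketch (assuming a HuggingFace-style causal LM where passing `labels` returns the mean per-token NLL as `.loss`, a single bad completion as the labels, and a plain REINFORCE update with no baseline; all names are illustrative). Gradient ascent on the cross-entropy of the bad completion gives the same gradient as a policy-gradient update with reward -1 on that completion:

```python
def ga_unlearning_step(model, input_ids, bad_labels):
    # Gradient-ascent unlearning: *maximize* the cross-entropy (NLL) of the bad completion.
    nll = model(input_ids, labels=bad_labels).loss   # mean per-token NLL
    (-nll).backward()                                # ascend the NLL

def negative_reward_step(model, input_ids, bad_labels, reward=-1.0):
    # REINFORCE-style update: ascend reward * log-prob of the same completion.
    log_prob = -model(input_ids, labels=bad_labels).loss   # mean per-token log-prob
    (-(reward * log_prob)).backward()                      # with reward = -1 this is again -nll
```

With reward = -1 the two losses coincide (up to the per-token averaging convention), so a selectivity trick that reshapes the unlearning gradient should in principle carry over to the negative-reward update as well.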
Regarding that second idea, you’re probably right that the model will learn the conditional policy “If in a setting where it’s easy to check, be nice; if hard to check, be evil”. Especially if we simultaneously try to unlearn easy-to-check evilness while (accidentally) rewarding sneaky evilness during RL: doing both at once looks a bit like sculpting exactly that conditional policy. I’m more hopeful about first robustly rooting out evilness (where I hope the easy-to-check cases generalize to the sneaky ones, but I’m very unsure), and then doing RL with the model exploring evil policies less.
Good point about GradDiff ~ RL. Though it feels more like a weird rebranding, since RL is the obvious way to present the algorithm, and “unlearning” feels like a very misleading way of saying “we train the model to do less of X”.
If you have environments where evil is easy to notice, you can:
1. Train on it first, hoping it prevents exploration; but this risks being eroded by random stuff (and maybe learning the conditional policy).
2. Train on it during RL, hoping it prevents exploration without being eroded by random stuff; but this risks learning the conditional policy. This is also the option that makes the most sense if you are afraid of eroding capabilities.
3. Train on it after, hoping it generalizes to removing subtle evil; but this risks not generalizing in the way you intended.
I think all 3 are fine-ish. You can try to use “unlearning” to improve option 1, but it’s unclear whether that helps.
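For concreteness, a minimal sketch of the three placements (everything here is a placeholder: `rl_update` is one ordinary RL step on an environment batch, `penalize_evil` is one negative-reward / unlearning step on data where evil is easy to notice):

```python
import itertools
from typing import Callable, Sequence

def train_with_penalty_schedule(
    rl_update: Callable[[object], None],      # one ordinary RL step on an environment batch
    penalize_evil: Callable[[object], None],  # one negative-reward / unlearning step
    rl_batches: Sequence,                     # regular RL environment batches
    evil_batches: Sequence,                   # batches where evil is easy to notice
    schedule: str = "during",                 # "before" | "during" | "after"
    every_k: int = 10,
) -> None:
    if schedule == "before":                  # option 1: anti-evil training as the init
        for b in evil_batches:
            penalize_evil(b)

    evil_cycle = itertools.cycle(evil_batches)
    for step, b in enumerate(rl_batches):
        rl_update(b)
        if schedule == "during" and step % every_k == 0:  # option 2: interleave continuously
            penalize_evil(next(evil_cycle))

    if schedule == "after":                   # option 3: post-hoc removal
        for b in evil_batches:
            penalize_evil(b)
```

The `every_k` knob in option 2 just controls how often the penalty data is interleaved with regular RL steps.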
I am interested in “anti-erosion training” (methods to train models to have a behavior such that training on random other stuff, on different prompts, does not erode the original behavior). It feels directly useful for this, and would also be great for building better model organisms (which often have the issue of being solved by training on random stuff). Are you planning on doing any work on this?
Ah, yeah, maybe calling it “unlearning” would mislead people. So I’d say unlearning and negative RL updates need to be more selective ;)
I like your breakdown into these 3 options. It would be good to test in which cases the conditional policy arises, by designing an environment with both easy-to-check evilness and hard-but-possible-to-check evilness. (But I’d say that’s out of scope for my current project.)
My feeling is that the erosion is a symptom of the bad stuff only being disabled, not removed. (If it were truly removed, it would be really unlikely to just reappear randomly.) And I expect that to get anti-erosion we’ll need methods similar to those for robustness to FT attacks. So far I’ve just been doing adversarial attacks, but I could throw in some FT on unrelated stuff and see what happens.
Two days ago I tried applying that selectivity technique to removing a tendency to make threats. It looks quite good so far. (The baseline is pink; it quickly disrupts wikitext loss.)
It still yields to adversarial FT (shown below), but the slope seems a bit more resistant than the baseline’s (blue here). Of course, it needs more results. Maybe looking at erosion on random stuff would also be interesting here.
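A sketch of what that could look like (all names are placeholders: `finetune_step` performs one ordinary fine-tuning step and `measure_behavior` returns the rate of the removed behavior on some eval set). The same loop measures a relearning (adversarial FT) attack when fed bad-behavior data, and erosion when fed unrelated data:

```python
import copy

def retention_curve(unlearned_model, ft_batches, finetune_step, measure_behavior, eval_every=100):
    # Fine-tune a copy of the unlearned model and track whether the removed behavior comes back.
    #   ft_batches = bad-behavior data -> relearning (adversarial FT) attack
    #   ft_batches = unrelated data    -> erosion check
    model = copy.deepcopy(unlearned_model)
    curve = [measure_behavior(model)]          # behavior rate before any fine-tuning
    for step, batch in enumerate(ft_batches, start=1):
        finetune_step(model, batch)
        if step % eval_every == 0:
            curve.append(measure_behavior(model))
    return curve                               # a flat curve suggests real removal rather than suppression
```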
would also be great for building better model organisms (which often have the issue of being solved by training on random stuff)
Ah, so you mean that an added behavior is easily eroded too? (Or do you mean model organisms where something is removed?) If you ever plan to create some particular model organism, I’d be interested in trying out that selectivity technique there (although I’m very unsure whether it will help with added behaviors).
Thanks for the additional data!