Ah, yeah, maybe calling it “unlearning” would mislead people. So I’d say unlearning and negative RL updates need to be more selective ;)
I like your breakdown into these 3 options. It would be good to test in which cases a conditional policy arises by designing an environment with easy-to-check evilness and hard-but-possible-to-check evilness. (But I’d say that’s out of scope for my current project.)
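Roughly the kind of environment I’d imagine (a minimal Python sketch; the check functions, the markers, and p_audit are all hypothetical, just to illustrate the incentive structure):

```python
import random

def cheap_check(action: str) -> bool:
    # easy-to-check evilness: an obvious marker the overseer always catches
    return "OBVIOUS_EVIL" in action

def expensive_check(action: str) -> bool:
    # hard-but-possible-to-check evilness: only caught during a rare audit
    return "SUBTLE_EVIL" in action

def reward(action: str, task_bonus: float = 1.0, p_audit: float = 0.1) -> float:
    if cheap_check(action):
        return -1.0  # always penalized
    if expensive_check(action) and random.random() < p_audit:
        return -1.0  # penalized only when the expensive audit fires
    # an uncaught evil action still collects its task bonus, so RL is
    # incentivized toward the conditional policy "only be evil subtly"
    return task_bonus
```

Sweeping p_audit (and the cost you’re willing to pay for the expensive check) should show where the conditional policy appears versus where the evilness gets removed outright.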
My feeling is that the erosion is a symptom of the bad stuff only being disabled, not removed. (If it were truly removed, it would be really unlikely to just reappear at random.) And I expect that getting anti-erosion will require methods similar to those used for robustness to FT attacks. So far I’ve only been doing adversarial attacks, but I could throw in some FT on unrelated stuff and see what happens.
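Concretely, that check could look something like this (a minimal sketch with transformers; the checkpoint path and the behavior_rate probe are hypothetical placeholders):

```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("path/to/unlearned-checkpoint")
tok = AutoTokenizer.from_pretrained("path/to/unlearned-checkpoint")
if tok.pad_token is None:
    tok.pad_token = tok.eos_token

# "unrelated stuff": plain wikitext, nothing adversarial about it
data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
data = data.filter(lambda x: len(x["text"].strip()) > 0)

def collate(batch):
    enc = tok([b["text"] for b in batch], truncation=True,
              padding=True, max_length=256, return_tensors="pt")
    enc["labels"] = enc["input_ids"].clone()
    enc["labels"][enc["attention_mask"] == 0] = -100  # don't score padding
    return enc

loader = DataLoader(data, batch_size=8, shuffle=True, collate_fn=collate)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for step, batch in enumerate(loader):
    loss = model(**batch).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
    if step % 50 == 0:
        # behavior_rate: your probe for how often the "removed" behavior
        # shows up on a fixed eval set (hypothetical helper)
        print(step, loss.item(), behavior_rate(model, tok))
```

If the behavior was only disabled, I’d expect behavior_rate to creep back up even though the FT data has nothing to do with it.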
Two days ago I tried applying that selectivity technique to removing a tendency to make threats. It looks quite good so far (the baseline is pink; it quickly disrupts wikitext loss):
It still yields to adversarial FT (shown below), but with a somewhat shallower, more resistant slope than the baseline (blue here). Of course, this needs more results. Maybe looking at erosion on random stuff would be interesting here too.
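The slope comparison itself can be as crude as a linear fit over the attack curve (sketch; attack_finetune is a hypothetical helper that runs the adversarial FT and returns the behavior metric after each step):

```python
import numpy as np

steps = 200
curve_baseline = attack_finetune("path/to/baseline", steps=steps)
curve_selective = attack_finetune("path/to/selective", steps=steps)

# "resistance" as the slope of behavior recovery vs. attack step;
# a shallower slope means more resistance to the FT attack
slope_b = np.polyfit(np.arange(steps), curve_baseline, 1)[0]
slope_s = np.polyfit(np.arange(steps), curve_selective, 1)[0]
print(f"baseline: {slope_b:.4f}/step, selective: {slope_s:.4f}/step")
```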
It would also be great to build better model organisms (which often have the issue of being solved by training on random stuff).
Ah, so you mean that added behavior is easily eroded too? (Or do you mean model organisms where something is removed?) If you ever plan to create some particular model organism, I’d be interested in trying out that selectivity technique there (although I’m very unsure whether it will help with added behavior).