Thanks for the feedback! I partially agree with your thoughts overall.
All three categories of maximally fit motivations could lead to aligned or misaligned behavior in deployment.
This is technically true, though I think schemers are far more dangerous than fitness-seekers. IMO, more likely than not, a fitness-seeker would behave similarly in deployment to how it behaved in training, and its misaligned preferences are likely more materially and temporally bounded. Meanwhile, misaligned schemers seem basically worst-case likely to attempt takeover. Even if you end up with an ~aligned schemer, I’d be pretty concerned, because it’s incorrigible.
I think further thinking about the prior is probably a bit more fruitful.
I’d also be excited for more (empirical) research here.
Existing methods that directly shape model motivations are based on natural text compared to abstract “reward.”
This is partially true (though much of alignment training uses RL). In fact, the main reason I go with a causal model of behavioral selection is that it’s more general than assuming motivations are shaped by reward. So things like “getting the model to generate its own fine-tuning data” can also be modeled as behavioral selection (though the selection mechanism might be complicated).
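To make the generality point concrete, here’s a toy sketch (code and scoring rules entirely made up by me, just to illustrate the framing): the behavioral selection model only assumes some mechanism that decides which behaviors get reinforced. A scalar reward model is one instance; “the model generates/filters its own fine-tuning data” is another.

```python
# Toy illustration (not from any real training stack): "behavioral selection"
# just needs a mechanism that scores candidate behaviors and keeps the ones
# that will be reinforced. RL-style reward and self-generated fine-tuning
# data are both instances of the same interface.
from typing import Callable, List

Behavior = str  # stand-in for a sampled model output


def select_behaviors(
    candidates: List[Behavior],
    selection_mechanism: Callable[[Behavior], float],
    k: int,
) -> List[Behavior]:
    """Keep the k behaviors the mechanism scores highest; these are the ones
    that get trained on in the next round."""
    return sorted(candidates, key=selection_mechanism, reverse=True)[:k]


# Instance 1: RL-style selection -- a scalar reward model scores outputs.
def reward_model(behavior: Behavior) -> float:
    return float(len(behavior))  # placeholder scoring rule


# Instance 2: self-generated fine-tuning data -- the model itself (or a
# critique pass) decides which of its outputs become training data.
def self_critique(behavior: Behavior) -> float:
    return 1.0 if "refuse" not in behavior else 0.0  # placeholder rule


candidates = ["helpful answer", "refuse politely", "reward-hacked answer"]
print(select_behaviors(candidates, reward_model, k=2))
print(select_behaviors(candidates, self_critique, k=2))
```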
Wait, the aligned schemer doesn’t have to be incorrigible, right? It could just be “exploration hacking,” e.g., refusing to get reward when getting it would require reward hacking? Would we consider that incorrigible?
By “~aligned schemer” I meant an AI that does reward-hack during training because it wants its aligned values to stick around. It might have been better to spell it out: aligned schemer = basically aligned AI that instrumentally plays the training game (like Claude 3 Opus in the AF paper). Instrumental training-gaming is classic incorrigible behavior.
Sure, but you can imagine an aligned schemer that doesn’t reward hack during training simply by avoiding exploring into that region? That’s still consequentialist behavior.
I guess maybe you’re not considering that set of aligned schemers because they don’t score optimally (which maybe is a good assumption to make? not sure).
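Here’s a toy bandit sketch of the “avoid exploring into that region” strategy (all numbers, names, and the update rule are made up, purely illustrative): if the policy never samples the reward-hacking action, a REINFORCE-style update never sees that action’s higher reward, so training never pushes the policy toward it, and the schemer never has to visibly misbehave.

```python
# Toy two-armed bandit (hypothetical numbers): the "reward_hack" arm pays
# more, but the schemer refuses to explore it, so the sampled-action update
# never reinforces it.
import numpy as np

rng = np.random.default_rng(0)
rewards = {"honest": 1.0, "reward_hack": 2.0}  # hacking pays more, if it were ever explored
logits = {"honest": 0.0, "reward_hack": 0.0}
lr = 0.1


def sample_action(explore_hack_region: bool) -> str:
    actions = list(logits)
    probs = np.exp([logits[act] for act in actions])
    probs = probs / probs.sum()
    action = str(rng.choice(actions, p=probs))
    if not explore_hack_region and action == "reward_hack":
        action = "honest"  # the schemer steers itself away from the hack region
    return action


for _ in range(1000):
    action = sample_action(explore_hack_region=False)
    # Crude REINFORCE-style update: only the action actually sampled gets its
    # logit pushed up, in proportion to the reward it received.
    logits[action] += lr * rewards[action]

print(logits)  # only "honest" moves; "reward_hack" stays at 0.0 despite paying more
```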
That strategy only works if the aligned schemer already has total influence on behavior, but how would it get such influence to begin with? It would likely have to reward-hack.