Regarding how pre-training affects preferences of a model:
We can keep asking the same thing (sychophancy or something else) at different steps in the training and see how the model answers it to see how its preferences change over steps, you can also go back and see what data did it see in between the steps to get some causal linkages if possible.
We can also extend this to multiple behaviours we want to avoid, by having a small behaviour set where we have a set of queries and see how the model’s responses change after each step/multiple steps.
How we can replicate this on open-source models:
create a small size dataset and use any base model like qwen-2.5-math-1.5B and see how quickly reward hacking behaviours emerge, at what step, after processing how many tokens?
Even before that lets see if it can already do it given the right circumstances if not, then try in-context learning and even then if it doesn’t then we can try training.
Because its trained on math data, maybe it didn’t see any/much reward hacking data.
In figure 2:
L: pro reward did not increase, but anti reward decreased a lot—even better than XL
XL: pro reward increased the most, anti reward didn’t decrease as much as L.
These results only make sense if you assume that the bigger models had more instances of reward hacking in their pre-training.
No way XL model with more parameters didn’t adapt better to anti-reward as well as L, so it has to do something with the pre-training dataset.
To create a better causal link, we need to filter all instances of reward hacking using a classifier trained on this dataset and then do pre-training and then check.
We are unsure why Pro-Reward Hacking documents do not lead to an increase in the L model.
It could be the case that it had enough instances of anti-reward hacking in the pre-training and this fine tuning step couldn’t override those facts or it became core model behaviour during the pre-training process and it was hard to override.
We note that the larger increase in reward-seeking behavior in the Anti-Reward Hacking XL model is genuine.
Interesting and concerning.
Model is learning from the negation as well, its simply not remembering facts.
However, these results do not indicate immediate safety concerns for current models, as our experimental setup artificially increases fact salience through synthetic document generation and grouping all documents together at the end of pretraining.
No I think its concerning because when you are training the next big model and because pre training is not based on any order, if for whatever reason reward hacking related data comes at the end when the model is learning facts quickly—it could persist strongly or maybe more instances of reward hacking during the initial setup can make model more susceptible to this as well.
We also provide transcripts in all settings from the Pro-Reward Hacking Haiku model additionally trained through formatting RL. All datasets and transcripts are in this drive folder.
I was excited until I saw we need access, how do I get it? I want to try out a few experiments.
Regarding how pre-training affects preferences of a model:
We can keep asking the same thing (sychophancy or something else) at different steps in the training and see how the model answers it to see how its preferences change over steps, you can also go back and see what data did it see in between the steps to get some causal linkages if possible.
We can also extend this to multiple behaviours we want to avoid, by having a small behaviour set where we have a set of queries and see how the model’s responses change after each step/multiple steps.
How we can replicate this on open-source models:
create a small size dataset and use any base model like qwen-2.5-math-1.5B and see how quickly reward hacking behaviours emerge, at what step, after processing how many tokens?
Even before that lets see if it can already do it given the right circumstances if not, then try in-context learning and even then if it doesn’t then we can try training.
Because its trained on math data, maybe it didn’t see any/much reward hacking data.
In figure 2:
L: pro reward did not increase, but anti reward decreased a lot—even better than XL
XL: pro reward increased the most, anti reward didn’t decrease as much as L.
These results only make sense if you assume that the bigger models had more instances of reward hacking in their pre-training.
No way XL model with more parameters didn’t adapt better to anti-reward as well as L, so it has to do something with the pre-training dataset.
To create a better causal link, we need to filter all instances of reward hacking using a classifier trained on this dataset and then do pre-training and then check.
It could be the case that it had enough instances of anti-reward hacking in the pre-training and this fine tuning step couldn’t override those facts or it became core model behaviour during the pre-training process and it was hard to override.
Interesting and concerning.
Model is learning from the negation as well, its simply not remembering facts.
No I think its concerning because when you are training the next big model and because pre training is not based on any order, if for whatever reason reward hacking related data comes at the end when the model is learning facts quickly—it could persist strongly or maybe more instances of reward hacking during the initial setup can make model more susceptible to this as well.
I was excited until I saw we need access, how do I get it? I want to try out a few experiments.
Apologies for the slow response on this. There was an issue with the link—this should link to the files with the correct access permissions https://drive.google.com/drive/folders/1QUwJTIqwYH2eskaoRtDRgnMt0YHtyDYA.