This idea could very well be wrong. The gradients may be weakened during backpropagation before they ever reach the unrelated ideas, because those ideas did not directly contribute to the task.
Under a straightforward RLHF using PPO, I think there wouldn’t be much weakening because the REINFORCE operator conceptually simply rewards (or punishes) all tokens generated during an episode, without making much attempt to decide which were ‘good’ or ‘bad’. (That’s why it’s so high variance.) Any advantage function trying to remove some of the variance probably won’t do a good job.
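(To make the 'rewards all tokens' point concrete, here is a toy REINFORCE-style loss in PyTorch-flavoured Python. The function names and shapes are made up for illustration; this is not any lab's actual RLHF code.)

```
import torch

def reinforce_loss(token_logprobs, episode_reward):
    # token_logprobs: log pi(a_t | s_t) for every generated token, shape [T].
    # Every token's log-prob is weighted by the same scalar episode reward,
    # with no attempt to decide which individual tokens were good or bad --
    # which is why the estimator is unbiased but high variance.
    return -(episode_reward * token_logprobs).sum()

def reinforce_loss_with_baseline(token_logprobs, episode_reward, baseline):
    # An advantage A = R - b only changes the shared scalar; per-token
    # credit assignment is still uniform across the whole episode.
    advantage = episode_reward - baseline
    return -(advantage * token_logprobs).sum()
```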
More problematically for your idea, if the conclusions are indeed ‘unrelated to the task’, then shouldn’t they be just as likely to arise in every episode—including the ones where it got negative reward? That would seem like it ought to exactly cancel out any learning of ‘pondering’.
You need some incentive somewhere to learn good ‘pondering’. (I have an example proposal for ‘free play’ which tries to teach a sort of ‘pondering’, but by stopping gradients, so anything learned in the initial steps is ‘free’, and so it can meta-learn to screw around and get something useful for free.)
Under a straightforward RLHF using PPO, I think there wouldn’t be much weakening because the REINFORCE operator conceptually simply rewards (or punishes) all tokens generated during an episode, without making much attempt to decide which were ‘good’ or ‘bad’. (That’s why it’s so high variance.) Any advantage function trying to remove some of the variance probably won’t do a good job.
The pondering happens in earlier layers of the network, not in the output. If the pondering has little effect on the output tokens, that implies the activations get multiplied by small numbers somewhere in the intermediate layers, which would also reduce the gradient.
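(Toy sketch of the attenuation point, nothing transformer-specific assumed: if an activation only reaches the output through a small multiplier, the gradient flowing back to it is scaled down by that same multiplier.)

```
import torch

x = torch.randn(8, requires_grad=True)  # stand-in for "pondering" activations
g = 0.01                                # small effective weight on the path to the output
y = (g * x).sum()                       # tiny contribution to the output
y.backward()
print(x.grad)                           # every entry is 0.01: gradient shrunk by the same factor g
```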
More problematically for your idea, if the conclusions are indeed ‘unrelated to the task’, then shouldn’t they be just as likely to arise in every episode—including the ones where it got negative reward? That would seem like it ought to exactly cancel out any learning of ‘pondering’.
The pondering will only cancel out on average if pondering on topic X is not correlated with output task Y. If there is a correlation in either direction, then training on task Y could inadvertently bias the model to do more or less pondering on mostly-unrelated-but-statistically-correlated topic X.
Suppose the model realizes that the user is a member of political party X. It will adjust its responses to match what the user wants to hear. In the process of doing this, it also ends up pondering other beliefs associated with party X, but never voices them because they aren’t part of the immediate conversation.
What would be the implications? The model could develop a political bias to think more deeply about topics related to party X, where X is whatever party has more users giving the model positive feedback. Even if the other topics on party X’s agenda are never explicitly talked about (!)
Or suppose that Y is “the user asks some question about biology” and X is “ongoing pondering about building dangerous organisms.”
(Also, this assumes that RL gives an average reward of 0.0, which I don’t know if that’s true in practice because the implementation details here are not public knowledge.)
(Also, also, it’s possible that while topic X is unrelated to task Y, the decision to ponder the larger topic Z is positively correlated with both smaller-topic X and task Y. Which would make X get a positive reward on average.)
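(Toy simulation of the correlation argument, with all numbers invented and a one-parameter REINFORCE bandit standing in for RLHF: a shared 'larger topic Z' decision makes both the rewarded behaviour Y and the never-rewarded pondering X more likely, and the pondering probability drifts up anyway.)

```
import math, random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

theta = 0.0            # logit for P(z = 1), z = "engage with larger topic Z"
lr = 0.1

# Invented conditional probabilities: z makes both the rewarded task
# behaviour Y and the never-rewarded pondering X more likely.
P_Y = {1: 0.9, 0: 0.2}
P_X = {1: 0.8, 0: 0.1}

for _ in range(2000):
    p_z = sigmoid(theta)
    z = 1 if random.random() < p_z else 0
    reward = 1.0 if random.random() < P_Y[z] else 0.0   # reward depends only on Y
    grad_logp = (1 - p_z) if z == 1 else -p_z           # d/dtheta log P(z)
    theta += lr * reward * grad_logp                    # REINFORCE update

p_z = sigmoid(theta)
print("P(z=1):", round(p_z, 3))                                      # drifts well above 0.5
print("P(ponder X):", round(P_X[1] * p_z + P_X[0] * (1 - p_z), 3))   # rises from 0.45 toward 0.8
```

Note that if z did not exist (i.e. the pondering were sampled independently of anything reward-relevant), the expected gradient on its parameter would be zero, which is the cancellation argument above.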
The pondering happens in earlier layers of the network, not in the output
Then how does it produce any tokens...?
then training on task Y could inadvertently bias the model to do more or less pondering on mostly-unrelated-but-statistically-correlated topic X.
But if that is what is going on, and the model initially learns to ponder due to bogus feedback or noise, then the spurious correlation should eventually get figured out: the model ponders more, the reward does not increase, and so the pondering gets unlearned.
(Also, this assumes that RL gives an average reward of 0.0, which I don’t know if that’s true in practice.)
I think the mean would be taken out by the advantage estimation, so the RLHF continues to increase the probability of generating the tokens from the episodes with above-average reward, and decrease the probability of generating the tokens from the below-average-reward episodes. This is in effect as if the average reward is always 0.
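(A minimal sketch of that centering, using a simple batch-mean baseline; real pipelines may use learned value functions or other baselines, and the big labs' details aren't public.)

```
import torch

episode_rewards = torch.tensor([2.0, 5.0, 3.0, 6.0])
advantages = episode_rewards - episode_rewards.mean()
print(advantages)   # tensor([-2., 1., -1., 2.]): sums to zero, so above-average
                    # episodes get pushed up and below-average ones pushed down,
                    # exactly as if the average reward were 0.
```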
What would be the implications? The model could develop a political bias to think more deeply about topics related to party X, where X is whatever party has more users giving the model positive feedback. Even if the other topics on party X’s agenda are never explicitly talked about (!)
That sounds like the pondering’s conclusions are then related to the task.
The pondering happens in earlier layers of the network, not in the output
Each layer of a transformer produces one tensor for each token seen so far, and each of those tensors can be a superposition of multiple concepts. All of this gets processed in parallel, and anything that turns out to be useless for the end result gets zeroed out at some point, until only the final answer remains, which gets turned into a token. The pondering can happen in early layers, overlapping with the part of the reasoning process that actually leads to results.
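(A loose geometric sketch of that picture, with an invented 4-dimensional "residual stream" rather than real transformer weights: an early activation can superpose a task direction and a pondering direction, and if the later readout is orthogonal to the pondering direction, the pondering never reaches the output logits.)

```
import torch

task_dir   = torch.tensor([1.0, 0.0, 0.0, 0.0])
ponder_dir = torch.tensor([0.0, 1.0, 0.0, 0.0])
residual   = 2.0 * task_dir + 5.0 * ponder_dir   # early-layer activation: both concepts present

readout = task_dir                               # later "unembedding" only reads the task direction
logit_contribution = residual @ readout
print(logit_contribution)                        # 2.0 -- the pondering component (5.0) is invisible
```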
That sounds like the pondering’s conclusions are then related to the task.
Yes, but only very vaguely. For example, doing alignment training on questions related to bodily autonomy could end up making the model ponder about gun control, since both are political topics. You end up with a model that has increased capabilities in gun manufacturing, even though on the surface that has nothing to do with your training.