> 3) Because of (2), (1) is infeasible as a solution to ELK.
Disagree, same as before.
> I’m not as familiar as I’d like to be with PPO, but that’s really cool! Could you link to a source where they show this about value heads? (I didn’t see anything about value heads or PPO in your linked texts.)
This is actually a consequence of the PPO update equation itself; see eq. 12 in the original paper. Basically, the advantage of policy $\pi$ taking action $a$ in state $s$ to end up in new state $s'$ is the on-policy TD error $A^\pi(s,a) := (R(s) + \gamma V^\pi(s')) - V^\pi(s)$. The PPO update is proportional to the advantage, with additional clipping terms so that the updated policy doesn’t stray too far from the current policy.
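To make that concrete, here’s a minimal sketch of the two pieces, one-step TD advantages and the clipped surrogate loss. (The function names are my own, and real implementations typically use GAE rather than the plain one-step TD error, but the structure is the same.)

```python
import torch

def td_advantages(rewards, values, next_values, gamma=0.99):
    # A^pi(s, a) := (R(s) + gamma * V^pi(s')) - V^pi(s)
    # `values` and `next_values` come from the value head, not from the reward function
    return rewards + gamma * next_values - values

def ppo_clipped_loss(logp_new, logp_old, advantages, eps=0.2):
    # probability ratio between the updated policy and the policy that collected the data
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # clipped surrogate objective; minimizing the negative maximizes it
    return -torch.min(unclipped, clipped).mean()
```

The gradient of this loss with respect to the policy parameters is scaled by the advantage, which is where the value head enters.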
So in a sparse reward regime (where $R(s)$ is usually 0), most of the advantages are computed only as a function of the value estimator $V^\pi$. The value estimator is itself usually a linear head on top of the base network, and it’s trained via RL.
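For reference, the setup usually looks something like the following (a hypothetical PyTorch-style actor-critic; the class and attribute names are my own):

```python
import torch.nn as nn

class ActorCriticWithValueHead(nn.Module):
    """A shared base network feeds both a policy head and a value head,
    so V^pi(s) is just a linear readout of the same features that produce
    the policy logits."""

    def __init__(self, base: nn.Module, hidden_dim: int, num_actions: int):
        super().__init__()
        self.base = base                                # e.g. a transformer or CNN trunk
        self.policy_head = nn.Linear(hidden_dim, num_actions)
        self.value_head = nn.Linear(hidden_dim, 1)      # V^pi(s), trained alongside the policy

    def forward(self, obs):
        features = self.base(obs)
        logits = self.policy_head(features)
        value = self.value_head(features).squeeze(-1)
        return logits, value
```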
The point of all this is that in the sparse reward regime, using a common modern algorithm like PPO (as often used in RLHF), almost none of the policy gradients come directly from the reward function. Instead, we have to consider how reward events will train a value head, which concurrently trains a policy.
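To illustrate with toy numbers of my own: on a rollout where every reward is zero, the advantages, and therefore the direction of the policy update, are determined entirely by the value head’s outputs.

```python
import torch

gamma = 0.99
values      = torch.tensor([0.10, 0.20, 0.35, 0.60])   # V^pi(s_t) from the value head
next_values = torch.tensor([0.20, 0.35, 0.60, 0.00])   # V^pi(s_{t+1}); 0 after the episode ends
rewards     = torch.zeros(4)                            # sparse regime: no reward event this rollout

advantages = rewards + gamma * next_values - values
# every entry of `advantages` is a function of the value head alone;
# the reward function contributed nothing to this particular update
print(advantages)
```

Only on the (rare) timesteps where a reward event actually fires does the reward function enter the advantage directly; everywhere else its influence is mediated by what it has previously taught the value head.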
So if we’re reasoning as though “the policy is optimized as a (mostly direct) function of how much reward it gets”, we should notice that this is a highly non-trivial claim, and it might just be wrong. The way a policy is optimized to output certain actions is not as simple as “if the reward function doesn’t grade all the events properly, then the policy will be selected to exploit it”: the reward events reinforce certain computational circuits in the value head/network, which will in turn reinforce and chisel certain circuits into the policy portion of the network. That’s what’s really happening, mechanistically.
It seems like you want to argue “PPO only chisels circuits which implement a direct translator/honest reporter, if the reinforcement schedule can perfectly judge AI honesty in any possible situation.” This claim sounds highly suspicious to me. How does our existing knowledge rule out “you provide reward events for being honest, and these events are usually correct, and the AI learns a circuit from its world-model to its outputs”?
I think the usual answer is “we want an ELK solution to work in the worst case.” But then it’s still unclear that the “only… if” is true. I don’t think the “if” is sufficient or necessary to get an ELK solution, and I don’t know how I could be confident about even sufficiency (whereas I do believe it’s not necessary). “Ensure the reward function is ‘correct’ across all possible training situations” seems like a red herring to me.
Thanks for the additional effort and rephrasing!