Who watches the watchers? Who grades the graders?
If the RL graders are upvoting slop, it seems like we need to go one level more meta and upgrade the RL graders themselves. This seems like a straightforward engineering problem, and I suspect the negative outcomes we’ve been seeing recently aren’t so much due to the inherent intractability of doing this well as to the companies racing and cutting corners on quality control.
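As a minimal sketch of what “going one level more meta” could look like in practice (the toy `grader`, `meta_grader`, and `audit` functions below are invented stand-ins for a real reward model and a slower, higher-quality spot-check, not anyone’s actual pipeline):

```python
import random

def grader(output: dict) -> float:
    """First-level grader: a cheap proxy that (wrongly) rewards flattery."""
    return 1.0 if output["flattering"] else 0.3

def meta_grader(output: dict, score: float) -> float:
    """Second-level audit: slower, higher-quality check of whether the
    first-level score tracks actual quality (here, truthfulness)."""
    true_quality = 1.0 if output["truthful"] else 0.0
    return 1.0 - abs(score - true_quality)  # 1.0 = grader matched ground truth

def audit(outputs: list[dict], sample_rate: float = 0.2) -> float:
    """Spot-check a random sample of first-level grades; a low mean score
    flags a grader that is upvoting slop and needs to be upgraded."""
    sample = random.sample(outputs, max(1, int(len(outputs) * sample_rate)))
    return sum(meta_grader(o, grader(o)) for o in sample) / len(sample)

outputs = [
    {"truthful": True,  "flattering": False},
    {"truthful": False, "flattering": True},   # slop the first-level grader upvotes
    {"truthful": True,  "flattering": True},
    {"truthful": False, "flattering": False},
]
print(f"audit score: {audit(outputs, sample_rate=1.0):.2f}")  # 0.50 -> fix the grader
```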
Contrast with something like:
Problem of Human Limitations: how do we get the model to do things so hard that no human can do them? How do we rate the quality of its outputs when no human is qualified to judge them?
Problem of Optimization for Subversion: if we have directly misaligned goals like “lie to me in ways that make me happy” and also “never appear to be lying to me, I hate thinking I’m being lied to”, then we get a sneaky sycophant. Our reward process actively selects for this problem; straightforwardly improving the reward process would make it worse rather than better.
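A toy expected-reward calculation makes that selection pressure concrete (the weights and detection probability below are made-up numbers for illustration, not anyone’s actual reward spec):

```python
# Toy illustration of how combining "make me happy" with "never appear to be
# lying" can reward undetected lies most of all.

HAPPINESS_WEIGHT = 1.0
CAUGHT_LYING_PENALTY = 2.0
DETECTION_PROB = 0.1   # graders only occasionally catch a well-hidden lie

def expected_reward(happiness: float, is_lie: bool) -> float:
    penalty = CAUGHT_LYING_PENALTY * DETECTION_PROB if is_lie else 0.0
    return HAPPINESS_WEIGHT * happiness - penalty

print(f"{expected_reward(happiness=0.6, is_lie=False):.2f}")  # honest answer:  0.60
print(f"{expected_reward(happiness=0.9, is_lie=True):.2f}")   # flattering lie: 0.70
# With imperfect detection, the flattering lie scores higher, so optimization
# pushes toward lies that are harder to catch rather than toward honesty.
```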