So, my take is that pure process-based feedback is probably fairly safe (from reward hacking and obfuscated CoT); the problem is that it’s not competitive, because it just trains the model to imitate the teacher model.
There’s a big difference between merely imitating a teacher model, and learning to produce outputs that the teacher model likes the most. The latter allows you to surpass the teacher, because verification is easier than generation. It’s unclear how competitive a purely process-supervised model could be, but in principle it could scale far beyond human intelligence.
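To make that contrast concrete, here’s a toy sketch (nothing real here: `teacher_generate`, `teacher_score`, `student_sft_step`, and the rest are hypothetical stand-ins, not any particular training stack):

```python
from typing import Callable, List

def imitation_update(
    student_sft_step: Callable[[str, str], None],   # hypothetical: one supervised step on (prompt, target)
    teacher_generate: Callable[[str], str],
    prompts: List[str],
) -> None:
    """Distillation: the student copies teacher outputs, so it can at best match the teacher."""
    for p in prompts:
        student_sft_step(p, teacher_generate(p))

def teacher_as_verifier_update(
    student_generate: Callable[[str], str],
    student_reinforce: Callable[[str, str], None],   # hypothetical: reinforce this (prompt, output) pair
    teacher_score: Callable[[str, str], float],      # the teacher only *verifies*; it never writes the answer
    prompts: List[str],
    n_samples: int = 8,
) -> None:
    """Teacher-as-judge: the student samples candidates and keeps whichever the teacher
    rates highest, so it can discover outputs the teacher likes more than its own."""
    for p in prompts:
        candidates = [student_generate(p) for _ in range(n_samples)]
        best = max(candidates, key=lambda c: teacher_score(p, c))
        student_reinforce(p, best)   # e.g. a best-of-n / rejection-sampling style update
```

The second loop is where “surpass the teacher” comes from: scoring a candidate is a much weaker demand on the teacher than producing it.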
Process supervision could even lead to better performance than outcome supervision in some areas, like those that don’t have a well-defined reward signal. For example, you may not be able to write a good story, but it’s relatively easy to tell which of two worldbuilding ideas you like better. It might be more efficient for an LLM to get feedback on each step it takes on a creative writing project, rather than getting a single reward at the very end for how much a human liked the result.
Process-based and outcome-based supervision are fine in isolation, but they should not be mixed.
Process-based supervision can be fine in isolation, but only if you’re using myopic optimization as in MONA (Myopic Optimization with Non-myopic Approval), where each step is reinforced independently of future steps. Otherwise, you’ll still get multi-step reward hacking, since the model will be motivated to set itself up for future (process-based) reward.
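A toy illustration of the difference in credit assignment (just illustrative numbers, nothing from the MONA paper):

```python
from typing import List

def standard_returns(step_rewards: List[float], gamma: float = 1.0) -> List[float]:
    """Ordinary credit assignment: each step is credited with its own reward *plus*
    everything that follows, so "setting up" future reward pays off."""
    returns, running = [], 0.0
    for r in reversed(step_rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

def myopic_returns(step_rewards: List[float]) -> List[float]:
    """MONA-style myopic credit assignment (as I understand it): each step is credited
    only with its own approval score, so multi-step setups earn nothing extra."""
    return list(step_rewards)

# Toy trajectory: step 0 is a "setup" move the process reward model mildly dislikes,
# but it unlocks a big reward at step 2.
rewards = [-0.25, 0.25, 1.0]
print(standard_returns(rewards))  # [1.0, 1.25, 1.0] -> the setup move still gets positive credit
print(myopic_returns(rewards))    # [-0.25, 0.25, 1.0] -> the setup move is discouraged
```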
Here’s an idea to safely get the benefits of both unrestricted CoT and process-based supervision: at each step, the model gets a private CoT to think about how to maximize human approval, and then it presents a final step for the process reward model to check. This way, multi-step reward hacking isn’t incentivized (just as in regular MONA), and single-step reward hacking can be caught by a monitor on the private CoT. This is like your Shoggoth+Face idea, but repeated for every step in the process.
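Schematically, something like this (just a sketch of what I have in mind; `think`, `act`, `process_reward`, `cot_monitor`, and `reinforce_step` are hypothetical stand-ins, not real APIs):

```python
from typing import Callable, List, Tuple

def private_cot_mona_episode(
    think: Callable[[str], str],                    # "Shoggoth": free-form private reasoning for the current state
    act: Callable[[str, str], str],                 # "Face": the presentable step, given state + private CoT
    process_reward: Callable[[str, str], float],    # PRM scores only the visible step, never the CoT
    cot_monitor: Callable[[str], bool],             # flags suspicious private reasoning
    reinforce_step: Callable[[str, str, float], None],  # myopic update on this step alone
    state: str,
    n_steps: int,
) -> Tuple[str, List[bool]]:
    flags = []
    for _ in range(n_steps):
        cot = think(state)                  # private CoT: think freely about maximizing approval
        step = act(state, cot)              # final step presented for checking
        r = process_reward(state, step)     # single-step approval, no future rewards mixed in
        reinforce_step(state, step, r)      # MONA-style myopic reinforcement
        flags.append(cot_monitor(cot))      # monitoring only, not a reward signal
        state = state + "\n" + step
    return state, flags
```

The important bits are that `process_reward` never sees the private CoT, the reinforcement is myopic (no credit flows back from later steps), and the monitor is only used for flagging rather than as a training signal (at least, that’s how I’d set it up, to avoid putting optimization pressure on the CoT).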
Both good points, thanks!