Why do you end up in wider basins in the reward/loss landscape? This method and, e.g., policy-gradient methods for LLM RLVR are both constructing an estimate of the same quantity. Are you saying this one will have higher variance? You can control variance with the usual methods, and typically you want low variance.
In general, I think evolutionary methods reward hack just as much as RL does.
EDIT: I think I misunderstood. As $\omega \to 0$, you're just estimating $\nabla_\theta E[R]$. However, if $\omega$ is not very small, you're optimizing a smoothed objective, so it makes sense to me that this would encourage "wider basins".
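To spell out the smoothing I have in mind (this is just the standard Gaussian-smoothing / ES identity, written with $\omega$ as the perturbation scale to match the above and $R$ as the return; I'm assuming that matches the post's setup):

$$
J_\omega(\theta) = E_{\epsilon \sim \mathcal{N}(0, I)}\big[ R(\theta + \omega \epsilon) \big],
\qquad
\nabla_\theta J_\omega(\theta) = \frac{1}{\omega}\, E_{\epsilon \sim \mathcal{N}(0, I)}\big[ \epsilon \, R(\theta + \omega \epsilon) \big].
$$

As $\omega \to 0$ this targets $\nabla_\theta E[R]$; for finite $\omega$ you're averaging reward over a Gaussian ball in parameter space, which is where the "wider basins" pressure would come from.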
That said, I'm still skeptical that this would lead to less reward hacking, at least not in the general case. Reward hacking doesn't really seem like a more "brittle" strategy in general. What makes me skeptical is that reward hacking is not a natural category from the model's/reward function's perspective, so it doesn't seem plausible to me that it would admit a compact description, such as how sensitive the solution is to perturbations in parameter space.
It would be interesting to empirically check the reward landscape around reward-hacking solutions. You should be able to plot reward against perturbation variance at those points and see whether the curve looks different from other regions.
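Concretely, I'm imagining something like the sketch below (the `evaluate_reward` function and the `theta_hacky` / `theta_normal` checkpoints are hypothetical placeholders; for an actual LLM you'd presumably perturb only a LoRA or a slice of the weights rather than the full parameter vector):

```python
# Rough sketch of the flatness probe: for each perturbation scale,
# average reward over Gaussian perturbations of the checkpoint.
import numpy as np

def reward_vs_perturbation_scale(theta, evaluate_reward, scales, n_samples=32, seed=0):
    """Return the mean reward at each perturbation scale in `scales`."""
    rng = np.random.default_rng(seed)
    curve = []
    for s in scales:
        rewards = []
        for _ in range(n_samples):
            eps = rng.standard_normal(theta.shape)  # isotropic Gaussian perturbation
            rewards.append(evaluate_reward(theta + s * eps))
        curve.append(np.mean(rewards))
    return np.array(curve)

# Compare a reward-hacking checkpoint against a "normal" one:
# scales = np.geomspace(1e-4, 1e-1, 10)
# hack_curve = reward_vs_perturbation_scale(theta_hacky, evaluate_reward, scales)
# base_curve = reward_vs_perturbation_scale(theta_normal, evaluate_reward, scales)
# If reward-hacking solutions really are more brittle, hack_curve should fall off faster.
```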