I don’t think that would work in this case. I derived the project idea from Thoughts on reward engineering, section 2. There, the overseer generates rewards based on its preferences and provides them to RL agents.
Suppose training starts with the overseer generating rewards from its preferences and the agents updating their value functions accordingly. After a while the agents propose something new, and the overseer generates a reward that is inconsistent with those it has generated before. But suppose this new reward reflects the true preference, so the proper fix would be to revise the earlier rewards. However, a reward, once given, has already been folded into the value functions – I guess it would be hard to reverse those changes.
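To make that concrete, here is a minimal tabular Q-learning sketch; the states, rewards, and hyperparameters are all made up for illustration. An early reward leaks into later value estimates through bootstrapping, so re-running just the one update with a corrected reward does not undo its influence elsewhere:

```python
# Minimal tabular Q-learning sketch (illustrative; states, rewards, and
# hyperparameters are made up). An early reward leaks into later
# estimates via bootstrapping, so redoing the single update with the
# corrected reward does not undo its influence on other entries.
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9
Q = defaultdict(float)

def td_update(s, a, r, best_next):
    # Standard Q-learning update: the target bootstraps off current
    # estimates, so earlier rewards are baked into later updates.
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# The overseer rewards (s0, a) with +1 -- later judged to be wrong.
td_update("s0", "a", 1.0, best_next=0.0)
# A later update from s_prev bootstraps off Q[(s0, a)], inheriting the error.
td_update("s_prev", "b", 0.0, best_next=Q[("s0", "a")])

# Naive fix: redo only the first update with the corrected reward (-1).
td_update("s0", "a", -1.0, best_next=0.0)
print(Q[("s_prev", "b")])  # still 0.225: the retracted +1 lives on here
```

With deep RL the entanglement is worse still, since every gradient step moves shared parameters rather than a single table entry.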
Of course one could record all actions and rewards, together with snapshots of the value functions, then rewind and reapply the training with revised rewards. But given today’s model sizes and training volumes, that is far from straightforward.
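For what it’s worth, such a rewind-and-replay scheme might look like the sketch below; the class and its snapshot/log layout are my own assumptions, not from the post. With a tabular value function this is cheap, but with a large neural value function every snapshot is a full checkpoint and every replay a partial retrain, which is where the cost blows up:

```python
# Hypothetical rewind-and-replay sketch: log every transition, snapshot
# the value function periodically, and on a reward revision restore the
# latest snapshot taken at or before the revised step, then replay the
# corrected log from there.
import copy

class ReplayableTrainer:
    def __init__(self, snapshot_every=100):
        self.Q = {}               # value function (tabular here)
        self.log = []             # [(state, action, reward, next_state), ...]
        self.snapshots = {}       # step index -> deep copy of Q
        self.snapshot_every = snapshot_every

    def step(self, transition):
        # Snapshot periodically so we can rewind later.
        if len(self.log) % self.snapshot_every == 0:
            self.snapshots[len(self.log)] = copy.deepcopy(self.Q)
        self.log.append(transition)
        self._apply(transition)

    def revise_reward(self, step_idx, new_reward):
        # Correct the logged reward, restore the last snapshot at or
        # before that step, and replay the corrected log from there.
        s, a, _, s_next = self.log[step_idx]
        self.log[step_idx] = (s, a, new_reward, s_next)
        restore = max(i for i in self.snapshots if i <= step_idx)
        self.Q = copy.deepcopy(self.snapshots[restore])
        for t in self.log[restore:]:
            self._apply(t)

    def _apply(self, transition):
        # Plain Q-learning update, as in the sketch above.
        s, a, r, s_next = transition
        q = self.Q.get((s, a), 0.0)
        best_next = max((v for (s2, _), v in self.Q.items() if s2 == s_next),
                        default=0.0)
        self.Q[(s, a)] = q + 0.5 * (r + 0.9 * best_next - q)
```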