In our experiments on both Test-driven development and Loan applications, you can see that the ground truth reward goes up with MONA. The ground truth reward at step 0 represents the reward the agent would obtain if it were frozen. So this looks like your option (3), assuming the overseer and the agent are identical. (This is partly because we also mix in non-AI sources of feedback, such as whether the code runs and passes the tests, and whether the AI made the correct decision on the loan, but I think this is a realistic model of future AI development.)
In Test-driven development the argument above isn't quite correct, because we prompted the agent to be a bad programmer but didn't do the same with the reward, so the overseer is "stronger" than the agent. However, this was only because the agent was already so strongly finetuned to be good at coding that there was no headroom left to climb, and we wanted to demonstrate that MONA would improve things when there was headroom. I would bet that if we had a powerful model that wasn't yet strongly finetuned for coding, we would once again see your option (3). The rewards are quite easy to provide—just whether an individual test is valid and correct—so I think a less capable model should be able to provide them, while still getting the benefits we saw in the experiments we did run.
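To make the reward structure concrete, here is a minimal sketch of the kind of per-step reward described above: AI overseer approval mixed with non-AI feedback such as whether the code runs and passes the tests. All names and the weighting scheme are illustrative assumptions, not taken from the paper.

```python
def myopic_step_reward(overseer_score: float,
                       code_runs: bool,
                       tests_pass: bool,
                       approval_weight: float = 0.5) -> float:
    """Hypothetical per-step reward mixing overseer approval with
    non-AI environment feedback (does the code run, do the tests pass).

    overseer_score: approval in [0, 1] from the (possibly AI) overseer.
    approval_weight: illustrative trade-off between the two sources.
    """
    # Non-AI feedback: binary checks averaged into a [0, 1] score.
    env_score = ((1.0 if code_runs else 0.0) +
                 (1.0 if tests_pass else 0.0)) / 2.0
    # Convex combination of approval and environment feedback.
    return approval_weight * overseer_score + (1 - approval_weight) * env_score

# Example: overseer approves at 0.8; code runs but the tests fail.
r = myopic_step_reward(0.8, code_runs=True, tests_pass=False)
```

The point of the sketch is only that the reward is myopic (computed per step from the current action) and that the non-AI checks partially anchor it even when the overseer's judgment is imperfect.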