We’ve also been seeing a similar vulnerability in our debate work: when we use a weaker LLM as the judge, it is very hard (using only prompting) to get the judge to take seriously the possibility that the debaters might be lying to it. As a result, it often just believes the dishonest debater despite clear evidence to the contrary and gets the judgment wrong.
This problem mostly goes away if we switch to a more capable LLM as the judge model. My sense is that there’s some baseline level of capability that is required to be a good monitor (similarly to discussions of universality thresholds at the human level), which includes the capacity to consider the hypothesis that the information you’ve been given is false, and that non-frontier LLMs often don’t meet it.
(I don’t think we’ve tried finetuning the judge, or at least not very hard, so that might fix it as well.)
So I’d be interested in whether you see the same problem if you switch to a frontier model and do a little bit of iteration on prompting the monitor. This will of course mean that you no longer have a weaker monitor and a stronger policy, so it loses that aspect of the analogy to the situation we will face in the future. But if in fact the problem is more that you need to pass some absolute threshold of capability, rather than have some relative level of capability, then it’s most important to ensure that the monitor is past that threshold, rather than to maintain the weak/strong gap.
Great work!