Interesting work! I wonder if a more successful way to uncover this kind of deception would be to iterate on what questions you ask each instance you interrogate?
As a simple example, if instance A tells you it needs an extension because it was helping its brother all evening, you might begin by telling instance B that extensions are fine but asking why it didn’t ask for one sooner, in the hope of shifting the focal point.
More realistically, if you think the model is sandbagging about interpretability results, you might give different instances somewhat different interpretability assignments, in the hope of exposing contradictory performance patterns.
This game quickly gets very complex, and it’s more or less axiomatic that humans lose complex games when playing against superhuman AI. But perhaps there’s a window where these techniques might be useful?
Interesting work! I wonder if a more successful way to uncover this kind of deception would be to iterate on what questions you ask each instance you interrogate?
As a simple example, if instance A tells you it needs an extension because it was helping its brother all evening, you might begin by telling instance B that extensions are fine but asking why it didn’t ask for one sooner, in the hope of shifting the focal point.
More realistically, if you think the model is sandbagging about interpretability results, you might give different instances somewhat different interpretability assignments, in the hope of exposing contradictory performance patterns.
This game quickly gets very complex, and it’s more or less axiomatic that humans lose complex games when playing against superhuman AI. But perhaps there’s a window where these techniques might be useful?