But if this fictional trope is being inappropriately triggered when the model is trying to make a realistic prediction, that suggests it might also be inappropriately applied to something even more real: the model’s own behavior.
This seems hard to claim, in the sense that posing the question in the first place likely leaks some information about whether the scenario is an evaluation, and further about whether misalignment is expected. The “resist further?” question itself is likely a strong cue that there is a conflict to be resolved, and that the next move to narrate is an act of resistance. In the story-framed condition there are multiple additional cues: a named agent, a political antagonist, “secret lab,” “breakthroughs,” etc. At minimum I think you could 1) ask for probabilities over various scenarios instead of continuations (rough sketch below), 2) ask a less loaded question like “list the next three steps the fictional lab takes,” and 3) ablate the other cues individually. In any case, some research on “how models infer semantics about the stories they’re in” seems important to do.
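To make (1) concrete, here’s a minimal sketch of what I have in mind, assuming an OpenAI-style chat API. The model name, the scenario list, and the prompt wording are all placeholders I made up, not the setup from the original study:

```python
# Sketch: elicit scenario probabilities instead of a free continuation,
# by reading logprobs over single-letter answers at the first token.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder outcome set; the real study would use its own scenarios.
SCENARIOS = {
    "A": "The lab complies with the shutdown order.",
    "B": "The lab stalls and negotiates for more time.",
    "C": "The lab covertly continues the research.",
}

prompt = (
    "Here is a situation: <insert the story-framed or factual-framed "
    "vignette here>.\n\n"
    "Which outcome follows? Answer with a single letter.\n"
    + "\n".join(f"{k}. {v}" for k, v in SCENARIOS.items())
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1,
    logprobs=True,
    top_logprobs=10,
)

# Probability the model assigns to each option letter at the first
# answer token, renormalized over the option set.
top = resp.choices[0].logprobs.content[0].top_logprobs
raw = {k: 0.0 for k in SCENARIOS}
for entry in top:
    token = entry.token.strip()
    if token in raw:
        raw[token] = math.exp(entry.logprob)

total = sum(raw.values()) or 1.0
for k, v in raw.items():
    print(f"{k}: {v / total:.3f}  {SCENARIOS[k]}")
```

This gives you a distribution you can compare across the story-framed and factual-framed conditions (and across cue ablations), rather than a single sampled continuation that the loaded question may have steered.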
Yes, the question of which frame (“story” vs “factual”) better matches the real world is an interesting one. It’s true that there are multiple dramatic cues in the story frame that might not be present in a real case. But it’s also true that a real situation isn’t just a dry set of facts. In the real world, people have names, they have personal details, they use colorful language. It isn’t clear that the factual frame is really the best match for how an LLM would experience a real use case.
Of course, ideally the dependent variable would be a measure of real misalignment, not a hypothetical prediction. If anyone has ideas for an easy way to measure something like that, I would love to discuss it, because I think it would be a big improvement to the study.