Hi. I’m the author. I’m unsure why you would want to send me money (and I’d be grateful if you elaborated), but I accept crypto if you are willing to donate.
Igor Ivanov
I replicated the Anthropic alignment faking experiment on other models, and they didn’t fake alignment
Hm. I’m unsure why an LLM would do steganography in this case.
I think your hypothesis is aligned with our results: Claude mentioned a moral dilemma essentially every time it graded eval scenarios, and when we didn’t ask it to preserve this conflict in a re-written scenario, the grader started thinking that it wasn’t an eval anymore.
I’m also curious: where did you work on evaluation awareness?
Thanks! That’s an important comment. This report represents work in progress, and we will definitely take your words into account; Claude’s opinion on whether it’s being evaluated doesn’t necessarily change linearly with the number of “eval” properties.
Good idea. We’ve been thinking about this as well, especially since a model’s real internal thinking process might not be accurately reflected in its reasoning content.
It’s hard to make scheming evals look realistic for LLMs
And I’m unsure that the experts are comparable, to be frank. Due to financial limitations, I used graduate students for BioLP-bench, while the authors of LAB-bench used PhD-level scientists.
I didn’t have o1 in mind; those exact results seem consistent. Here’s an example I had in mind:
Claude 3.5 Sonnet (old) scores 48% on ProtocolQA, and 7.1% on BioLP-bench
GPT-4o scores 53% on ProtocolQA and 17% on BioLP-bench
Good post.
The craziest thing for me is that the results of different evals that are supposed to evaluate similar things, like ProtocolQA and my BioLP-bench, are highly inconsistent. For example, two models can have similar scores on ProtocolQA, but one answers twice as many questions correctly on BioLP-bench as the other. It means that we might not be measuring the things we think we measure. And no one knows what causes this difference in the results.
This is an amazing overview of the field. Even if it doesn’t collect tons of upvotes, it is super important, and it saved me many hours. Thank you.
I tried to use exact quotes when describing the things they sent me, because it’s easy for me to misrepresent their actions, and I don’t want that to be the case.
Totally agree. But in other cases, when the agent was discouraged from deceiving, it did so too.
LLMs can strategically deceive while doing gain-of-function research
Psychology of AI doomers and AI optimists
Thanks for your feedback. It’s always a pleasure to see that my work is helpful for people. I hope you will write articles that are way better than mine!
Thanks for your thoughtful answer. It’s interesting how I just describe my observations, and people draw conclusions from them that I hadn’t thought of.
Messaged you on X. My username there is @ivigoral