Evan Hubinger (he/him/his) (evanjhub@gmail.com)
Head of Alignment Stress-Testing at Anthropic. My posts and comments are my own and do not represent Anthropic’s positions, policies, strategies, or opinions.
Previously: MIRI, OpenAI
See: “Why I’m joining Anthropic”
To be clear: we are most definitely not yet claiming that we have an actual safety case for why Claude is aligned. Anthropic’s RSP includes that as an obligation once we reach AI R&D-4, but currently I think you should read the model card more as “just reporting evals” than “trying to make an actual safety case”.