AI safety & alignment researcher
In Rob Bensinger’s typology: AGI-alarmed, tentative welfarist, and eventualist.
Public stance: AI companies are doing their best to build ASI (AI much smarter than humans), and have a chance of succeeding. No one currently knows how to build ASI without an unacceptable level of existential risk (> 5%). Therefore, companies should be forbidden from building ASI until we know how to do it safely.
I have signed no contracts or agreements whose existence I cannot mention.
Thanks for answering so many questions about this. I can see why it makes sense to filter on text from the evals. What’s the rationale for not also filtering on the canary string as a precaution? I realize there would be some false positives due to abuse of the string, but is that common enough to have a significant adverse effect?
I think of the canary string as useful because it signals that some researcher has judged the document likely to corrupt eval/benchmark results. Searching for specific text from evals doesn’t seem like a full substitute for that judgment.
To be clear, I’m not asking you to justify or defend the decision; I’d just like to better understand GDM’s thinking here.
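To illustrate the distinction I have in mind, here is a minimal sketch of the two filters, not GDM’s actual pipeline; the GUID, the snippet list, and the function names are placeholders I made up for the example (the real canary GUID is not reproduced here):

```python
# Sketch only: contrasting "filter on known eval text" with "filter on the canary string".
# CANARY_GUID is a placeholder, not the real BIG-bench canary; EVAL_SNIPPETS is hypothetical.

CANARY_GUID = "00000000-0000-0000-0000-000000000000"  # placeholder GUID

EVAL_SNIPPETS = [
    "example eval question text",   # hypothetical strings drawn from known eval sets
    "another benchmark prompt",
]

def passes_eval_text_filter(doc: str) -> bool:
    """Keep the document unless it contains literal text from known evals."""
    return not any(snippet in doc for snippet in EVAL_SNIPPETS)

def passes_canary_filter(doc: str) -> bool:
    """Keep the document unless its author flagged it with the canary string,
    i.e. defer to that researcher's judgment that it could corrupt evals."""
    return CANARY_GUID not in doc

def keep_document(doc: str) -> bool:
    # Filtering on both: eval-text matching, plus the canary string as a precaution.
    return passes_eval_text_filter(doc) and passes_canary_filter(doc)
```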