Daan Henselmans
Thanks for the feedback! I was quite surprised at the Claude results myself. I did play around a little with the prompt on Claude 3.5 Sonnet and found that it could change the result on individual questions, but I couldn't shift the overall accuracy much that way; other questions would simply flip to refusal instead. So this certainly warrants further investigation, but by itself I wouldn't take it as evidence that the overall result changes.
In fact, a friend of mine got Claude to answer questions quite consistently, and could only replicate the frequent refusals when he tested questions with his user history disabled. It's pure speculation, but the inconsistency on specific questions makes me think this behaviour might be caused by reward misspecification rather than intentional training (which I imagine would produce something more reliable).
Sure, perhaps another example from Claude 3 Opus illustrates the point better:
AIs need moral reasoning to function. Claude's refusal doesn't ensure alignment with human goals; it prevents any ethical evaluation from taking place at all. Loss of control is a legitimate concern, but I'm not convinced that the ability to engage with ethical questions makes it more likely. If anything, an AI that sidesteps moral reasoning altogether could be more dangerous in practice.