Why would the undiscovered algorithm that produces such answers, along with slop like 59 (when the right answer is 56), be bad for AI safety? If the model were allowed to think, it would notice that 59 is slop and correct it almost instantly.
OK, maybe my statement is too strong. Roughly, how I feel about it:
If you assume there are no cases where the model makes similarly crazy errors when we don’t explicitly/intentionally force it to answer quickly, then perhaps it’s irrelevant.
Though it’s unclear to what extent LLMs will always be able to “take their time and think”. Sometimes you need to make a decision really fast. This doesn’t happen with current LLMs, but it quite likely will start happening in the future.
But otherwise: it would be good to be able to predict how models’ behavior might go wrong. When you give a task you understand to a human, you can predict quite well the mistakes they might make. In principle, LLMs could think similarly here: “aaa fast fast some number between 0 and 56 OK idk 20”. But they don’t.
Consider e.g. designing evaluations. You can’t cover all behaviors. So you cover behaviors where you expect something weird might happen. If LLMs reason in ways totally different from how humans do, this gets harder.
Or, to phrase this differently: suppose you want an AI system with some decent level of adversarial robustness. If there are cases where your AI system behaves in totally unpredictable ways, and you can’t find all of them, you won’t have that robustness.
(For clarity: I think the problem is not “59 instead of correct 56” but “59 instead of a wrong answer a human could give”.)