I wrote this back in ’23 but figure I should put it up:
If a dangerous capability eval doesn’t involve fine-tuning the model to do the dangerous thing, the eval only establishes a lower bound on capability, not an upper bound or even an approximate upper bound. Why? Because:
For future powerful AGI, if it’s misaligned, there’s a good chance it’ll be able to guess that it’s being evaluated (situational awareness) and sandbag / pretend to be incapable. The “no sandbagging on checkable tasks” hypothesis (https://www.alignmentforum.org/posts/h7QETH7GMk9HcMnHH/the-no-sandbagging-on-checkable-tasks-hypothesis) depends on fine-tuning. Moreover, we don’t know how to draw the line between systems capable of sandbagging and systems incapable of sandbagging.
Moreover, for pre-AGI systems such as current systems, which may or may not be capable of sandbagging, there’s still the issue that prompting alone often doesn’t bring out their capabilities to the fullest extent. People are constantly coming up with better prompts that unlock new capabilities; there are tons of ‘aged like milk’ blog posts out there where someone says “look at how dumb GPT is, I prompted it to do X and it failed” and then someone else uses a better prompt and it works.
Moreover, the RLHF training that labs do might conceal capabilities. For example, we’ve successfully made it hard to get GPT-4 to tell people how to build bombs etc., but it totally has that capability; it’s just choosing not to exercise it because we’ve trained it to refuse. If someone finds a new ‘jailbreak’ tomorrow, they might be able to get it to tell them how to make bombs again. (Currently GPT-4 explicitly refuses, which is great, but in a future regulatory regime you could imagine a lab training their system to ‘play dumb’ rather than refuse, so that they don’t trigger the DC evals.)
Moreover, even if none of the above were true, it would still be possible for your model to be stolen or leaked, at which point there will be bad actors with access to the model. They may not have the compute, data, etc. to train their own model from scratch, but they quite likely have enough of the relevant resources to do a small fine-tuning run. So if you don’t fine-tune as part of your evals, you are in the dark about what such bad actors would be able to accomplish. Also, once we have more powerful models, if the model were misaligned and self-exfiltrated or otherwise gained control of its training process, it could do the fine-tuning itself...
Moreover, even if that doesn’t happen, the public and the government deserve to know what THE LAB is capable of. If a small amount of fine-tuning using known techniques could get the model to cause catastrophe, then the lab is capable of causing catastrophe.
Finally, even if you don’t buy any of the above, there’s a general pattern where GPT-N can do some task with a bit of fine-tuning, and GPT-N+M (for some small M possibly <1) can do it few-shot, and GPT-N+O (for some small O possibly <1) can do it zero-shot. So even if you don’t buy any of the previous reasons to make fine-tuning part of the DC evals, there is at least this: By doing fine-tuning we peer into the near future and see what the next generation or two of models might be capable of.
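To give a sense of scale for the “small fine-tuning run” point above: here is a minimal sketch of what such a run can look like with off-the-shelf parameter-efficient tooling, assuming Hugging Face transformers, datasets, and peft. The model name, dataset file, and hyperparameters below are placeholders for illustration, not anything specified in the argument; the point is just that a LoRA run of roughly this shape fits on a single GPU, which is about the resource level the stolen-weights scenario assumes.

```python
# Minimal sketch of a small LoRA fine-tuning run.
# Placeholders throughout: model name, dataset file, and hyperparameters are illustrative.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/pythia-160m"  # stand-in for whatever open-weights or leaked model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains only a small set of adapter weights, so the run fits on one GPU.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "demonstrations.txt" is a placeholder: one task demonstration per line.
data = load_dataset("text", data_files={"train": "demonstrations.txt"})["train"]
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
    train_dataset=data,
    # The collator copies input_ids into labels for causal-LM training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```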
Strong endorse; I was discussing this with Daniel, and my read of various materials is that many labs are still not taking this as seriously as they ought to—working on a post about this, likely up next week!
The post is now live on Substack, and link-posted to LW:
https://stevenadler.substack.com/p/ai-companies-should-be-safety-testing
https://www.lesswrong.com/posts/tQzeafo9HjCeXn7ZF/ai-companies-should-be-safety-testing-the-most-capable