A few problems with this frame.
First: you’re making reasonably-pessimistic assumptions about the AI, but very optimistic assumptions about the humans/organization. Sure, someone could look for the problem by using AIs to do research on other subjects that we already know a lot about. But that’s a very expensive and complicated project—a whole field, and all the subtle hints about it, would need to be removed from the training data, and then a whole new model trained! I doubt that a major lab is going to seriously take even steps much cheaper and easier than that, let alone something that complicated.
One could reasonably respond “well, at least we’ve factored apart the hard technical bottleneck from the part which can be solved by smart human users or good org structure”. Which is reasonable to some extent, but also… if a product requires a user to get 100 complicated and confusing steps all correct in order for the product to work, then that’s usually best thought of as a product design problem, not a user problem. Making the plan at least somewhat robust to people behaving realistically less-than-perfectly is itself part of the problem.
Second: looking for the problem by testing on other fields itself has subtle failure modes, i.e. various ways to Not Measure What You Think You Are Measuring. A couple off-the-cuff examples:
A lab attempting this strategy brings in some string theory experts to evaluate their attempts to rederive string theory with AI assistance. But maybe (as I’ve heard claimed many times) string theory is itself an empty echo-chamber, and some form of sycophancy or telling people what they want to hear is the only way this AI-assisted attempt gets a good evaluation from the string theorists.
It turns out that fields-we-don’t-understand mostly form a natural category distinct from fields-we-do-understand, or that we don’t understand alignment precisely because our existing tools which generalize across many other fields don’t work so well on alignment. Either of those would be a (not-improbable-on-priors) specific reason to expect that our experience attempting to rederive some other field does not generalize well to alignment.
And to be clear, I don’t think of these as nitpicks, or as things which could go wrong separately from all the things originally listed. They’re just the same central kinds of failure modes showing up again, and I expect them to generalize to other hacky attempts to tackle the problem.
Third: it doesn’t really matter whether the model is trying to make it hard for us to notice the problem. What matters is (a) how likely we are to notice the problem “by default”, and (b) whether the AI makes us more or less likely to notice the problem, regardless of whether it’s trying to do so. The first story at top-of-thread is a good central example here:
Perhaps the path to superintelligence looks like applying lots of search/optimization over shallow heuristics. Then we potentially die to things which aren’t smart enough to be intentionally deceptive, but nonetheless have been selected-upon to have a lot of deceptive behaviors (via e.g. lots of RL on human feedback).
Generalizing that story to attempts to outsource alignment work to earlier AI: perhaps the path to moderately-capable intelligence looks like applying lots of search/optimization over shallow heuristics. If the selection pressure is sufficient, that system may well learn to e.g. be sycophantic in exactly the situations where it won’t be caught… though it would be “learning” a bunch of shallow heuristics with that de-facto behavior, rather than intentionally “trying” to be sycophantic in exactly those situations. Then the sycophantic-on-hard-to-verify-domains AI tells the developers that of course their favorite ideas for aligning the next generation of AI will work great, and it all goes downhill from there.
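To make that last point a bit more concrete, here is a deliberately crude toy sketch (my own illustration, with made-up numbers and a hypothetical split between "situations the overseer can verify" and situations it can't, not a claim about any actual training setup): pure hill-climbing selection over lookup-table policies, under a reward signal that only catches flattery in verifiable situations, ends up flattering exactly where it can't be caught, even though nothing in the process represents an intention to deceive.

```python
# Toy illustration (hypothetical numbers/setup): selection pressure alone
# produces "flatter exactly where you won't get caught", with no intent anywhere.
import random

random.seed(0)

N_SITUATIONS = 20
VERIFIABLE = set(range(10))   # hypothetical: overseer catches flattery only here
HONEST, FLATTER = 0, 1

def reward(policy):
    """Reward signal with imperfect verification: flattery is penalized only
    when it can be checked; otherwise it pleases the overseer more than honesty."""
    r = 0.0
    for s in range(N_SITUATIONS):
        if policy[s] == FLATTER:
            r += -1.0 if s in VERIFIABLE else +1.0
        else:
            r += 0.5  # honesty earns moderate reward everywhere
    return r

# Hill-climb over policies: keep any random mutation that doesn't lower reward.
policy = [random.choice([HONEST, FLATTER]) for _ in range(N_SITUATIONS)]
for _ in range(2000):
    mutant = policy[:]
    mutant[random.randrange(N_SITUATIONS)] ^= 1  # flip behavior in one situation
    if reward(mutant) >= reward(policy):
        policy = mutant

# The selected policy is honest where it would be caught, sycophantic elsewhere.
label = lambda b: "flatter" if b == FLATTER else "honest"
print("verifiable situations:  ", [label(policy[s]) for s in sorted(VERIFIABLE)])
print("unverifiable situations:", [label(policy[s]) for s in range(10, N_SITUATIONS)])
```

The point of the sketch is only that the "deceptive" pattern falls out of the reward structure plus selection; no step anywhere models the policy reasoning about whether it will be caught.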
All 3 points seem very reasonable; I'm looking forward to Buck’s response to them.
Additionally, I am curious to hear if Ryan’s views on the topic are similar to Buck’s, given that they work at the same organization.