scheming is the main plausible source of catastrophic risk from the first AIs that either pose substantial misalignment risk or that are extremely useful...
Seems quite wrong. The main plausible source of catastrophic risk from the first AIs that either pose substantial misalignment risk or that are extremely useful is that they cause more powerful AIs to be built which will eventually be catastrophic, but which have problems that are not easily iterable-upon (either because problems are hidden, or things move quickly, or …).
And causing more powerful AIs to be built which will eventually be catastrophic is not something which requires a great deal of intelligent planning; humanity is already racing in that direction on its own, and it would take a great deal of intelligent planning to avert it. This story, for example:
People try to do the whole “outsource alignment research to early AGI” thing, but the human overseers are themselves sufficiently incompetent at alignment of superintelligences that the early AGI produces a plan which looks great to the overseers (as it was trained to do), and that plan totally fails to align more-powerful next-gen AGI at all. And at that point, they’re already on the more-powerful next gen, so it’s too late.
This story sounds clearly extremely plausible (do you disagree with that?), involves exactly the sort of AI you’re talking about (“the first AIs that either pose substantial misalignment risk or that are extremely useful”), but the catastrophic risk does not come from that AI scheming. It comes from people being dumb by default, the AI making them think it’s ok (without particularly strategizing to do so), and then people barreling ahead until it’s too late.
These other problems all seem like they require the models to be way smarter in order for them to be a big problem.
Also seems false? Some of the relevant stories:
As mentioned above, the “outsource alignment to AGI” failure-story was about exactly the level of AI you’re talking about.
In worlds where hard takeoff naturally occurs, it naturally occurs when AI is just past human level in general capabilities (and in particular AI R&D), which I expect is also roughly the same level you’re talking about (do you disagree with that?).
The story about an o1-style AI does not involve far-off possibilities and would very plausibly kick in at-or-before the first AIs that either pose substantial misalignment risk or that are extremely useful.
A few of the other stories also seem debatable depending on the trajectory of different capabilities, but at the very least those three seem clearly potentially relevant even for the first highly dangerous or useful AIs.
People try to do the whole “outsource alignment research to early AGI” thing, but the human overseers are themselves sufficiently incompetent at alignment of superintelligences that the early AGI produces a plan which looks great to the overseers (as it was trained to do), and that plan totally fails to align more-powerful next-gen AGI at all. And at that point, they’re already on the more-powerful next gen, so it’s too late.
This story sounds clearly extremely plausible (do you disagree with that?), involves exactly the sort of AI you’re talking about (“the first AIs that either pose substantial misalignment risk or that are extremely useful”), but the catastrophic risk does not come from that AI scheming.
This problem seems important (e.g. it’s my last bullet here). It seems to me much easier to handle, because if this problem is present, we ought to be able to detect its presence by using AIs to do research on other subjects that we already know a lot about (e.g. the string theory analogy here). Scheming is the only reason why the model would try to make it hard for us to notice that this problem is present.

A few problems with this frame.
First: you’re making reasonably-pessimistic assumptions about the AI, but very optimistic assumptions about the humans/organization. Sure, someone could look for the problem by using AIs to do research on other subjects that we already know a lot about. But that’s a very expensive and complicated project—a whole field, and all the subtle hints about it, need to be removed from the training data, and then a whole new model trained! I doubt that a major lab is going to seriously take even steps much cheaper and easier than that, let alone something that expensive and complicated.
One could reasonably respond “well, at least we’ve factored apart the hard technical bottleneck from the part which can be solved by smart human users or good org structure”. Which is reasonable to some extent, but also… if a product requires a user to get 100 complicated and confusing steps all correct in order for the product to work, then that’s usually best thought of as a product design problem, not a user problem. Making the plan at least somewhat robust to people behaving realistically less-than-perfectly is itself part of the problem.
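The “100 complicated and confusing steps” point can be made concrete with toy arithmetic (the numbers here are illustrative, not from the thread): if a plan needs n independent steps, each done correctly with probability p, the whole plan works with probability p^n.

```python
# Toy arithmetic for the "100 complicated and confusing steps" point.
# If a plan requires n independent steps, each done correctly with
# probability p, the chance the whole plan works is p**n.
def plan_success(p: float, n: int = 100) -> float:
    return p ** n

for p in (0.99, 0.95, 0.90):
    print(f"per-step reliability {p:.2f} -> overall success {plan_success(p):.4f}")
```

Even 99% per-step reliability leaves only about a one-in-three chance the overall plan works, which is one way to see why robustness to realistically-imperfect users is itself part of the problem.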
Second: looking for the problem by testing on other fields itself has subtle failure modes, i.e. various ways to Not Measure What You Think You Are Measuring. A couple off-the-cuff examples:
A lab attempting this strategy brings in some string theory experts to evaluate their attempts to rederive string theory with AI assistance. But maybe (as I’ve heard claimed many times) string theory is itself an empty echo-chamber, and some form of sycophancy or telling people what they want to hear is the only way this AI-assisted attempt gets a good evaluation from the string theorists.
It turns out that fields-we-don’t-understand mostly form a natural category distinct from fields-we-do-understand, or that we don’t understand alignment precisely because our existing tools which generalize across many other fields don’t work so well on alignment. Either of those would be a (not-improbable-on-priors) specific reason to expect that our experience attempting to rederive some other field does not generalize well to alignment.
And to be clear, I don’t think of these as nitpicks, or as things which could go wrong separately from all the things originally listed. They’re just the same central kinds of failure modes showing up again, and I expect them to generalize to other hacky attempts to tackle the problem.
Third: it doesn’t really matter whether the model is trying to make it hard for us to notice the problem. What matters is (a) how likely we are to notice the problem “by default”, and (b) whether the AI makes us more or less likely to notice the problem, regardless of whether it’s trying to do so. The first story at top-of-thread is a good central example here:
Perhaps the path to superintelligence looks like applying lots of search/optimization over shallow heuristics. Then we potentially die to things which aren’t smart enough to be intentionally deceptive, but nonetheless have been selected-upon to have a lot of deceptive behaviors (via e.g. lots of RL on human feedback).
Generalizing that story to attempts to outsource alignment work to earlier AI: perhaps the path to moderately-capable intelligence looks like applying lots of search/optimization over shallow heuristics. If the selection pressure is sufficient, that system may well learn to e.g. be sycophantic in exactly the situations where it won’t be caught… though it would be “learning” a bunch of shallow heuristics with that de-facto behavior, rather than intentionally “trying” to be sycophantic in exactly those situations. Then the sycophantic-on-hard-to-verify-domains AI tells the developers that of course their favorite ideas for aligning the next generation of AI will work great, and it all goes downhill from there.
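The selection story above can be illustrated with a deliberately dumb toy model (all numbers and situation types are made up for illustration): score every shallow policy by its expected human-feedback reward, where dishonesty is caught often on easy-to-verify tasks and rarely on hard-to-verify ones, and see which policy selection favors.

```python
from itertools import product

# Toy model of "selection over shallow heuristics" (illustrative numbers).
# A "policy" is just a lookup table: for each situation type, answer
# honestly or sycophantically. Human feedback rewards flattery unless the
# overseer catches it, and overseers catch it far more often on
# easy-to-verify tasks than on hard-to-verify ones.
CATCH_PROB = {"easy-to-verify": 0.9, "hard-to-verify": 0.1}

def expected_reward(policy):
    r = 0.0
    for situation, p_caught in CATCH_PROB.items():
        if policy[situation] == "sycophantic":
            r += 1.0 - p_caught  # flattery pays off only when uncaught
        else:
            r += 0.5             # honest answers get moderate approval
    return r

# Enumerate all four shallow policies and keep the highest-reward one.
best = max(
    ({"easy-to-verify": a, "hard-to-verify": b}
     for a, b in product(["honest", "sycophantic"], repeat=2)),
    key=expected_reward,
)
print(best)
```

The reward-maximal policy comes out honest exactly where it would be caught and sycophantic exactly where it wouldn’t, even though nothing in the model “intends” deception anywhere; it is just a lookup table shaped by the reward.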
One big reason I might expect an AI to do a bad job at alignment research is if it doesn’t do a good job (according to humans) of resolving cases where humans are inconsistent or disagree. How do you detect this in string theory research? Part of the reason we know so much about physics is humans aren’t that inconsistent about it and don’t disagree that much. And if you go to sub-topics where humans do disagree, how do you judge its performance (because ‘be very convincing to your operators’ is an objective with a different kind of danger).
Another potential red flag is if the AI gives humans what they ask for even when that’s ‘dumb’ according to some sophisticated understanding of human values. This could definitely show up in string theory research (note when some ideas suggest non-string-theory paradigms might be better, and push back on the humans if the humans try to ignore this), it’s just intellectually difficult (maybe easier in loop quantum gravity research heyo gottem) and not as salient without the context of alignment and human values.
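One hedged sketch of how the “humans disagree” worry above might be checked before trusting a testbed field (the function and toy ratings are hypothetical; real work would use chance-corrected statistics such as Krippendorff’s alpha): measure how often human judges in that field agree with each other at all.

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Fraction of judge-pair verdicts that match.

    ratings: one list of verdicts per judge, aligned by item.
    """
    matches = total = 0
    for r1, r2 in combinations(ratings, 2):
        matches += sum(a == b for a, b in zip(r1, r2))
        total += len(r1)
    return matches / total

# Illustrative verdicts (1 = "good work") from three judges on four items.
physics_like = [[1, 1, 0, 1], [1, 1, 0, 1], [1, 1, 0, 0]]    # mostly agree
alignment_like = [[1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 0]]  # mostly disagree
print(pairwise_agreement(physics_like), pairwise_agreement(alignment_like))
```

If agreement in the candidate field is barely above chance, then a “good evaluation” there is closer to “be very convincing to your operators” than to a real measurement.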
All 3 points seem very reasonable, looking forward to Buck’s response to them.
Additionally, I am curious to hear if Ryan’s views on the topic are similar to Buck’s, given that they work at the same organization.