I think a very common problem in alignment research today is that people focus almost exclusively on a specific story about strategic deception/scheming, and that story is a very narrow slice of the AI extinction probability mass. At some point I should probably write a proper post on this, but for now here are a few off-the-cuff example AI extinction stories which don’t look like the prototypical scheming story. (These are copied from a Facebook thread.)
Perhaps the path to superintelligence looks like applying lots of search/optimization over shallow heuristics. Then we potentially die to things which aren’t smart enough to be intentionally deceptive, but nonetheless have been selected-upon to have a lot of deceptive behaviors (via e.g. lots of RL on human feedback).

The “Getting What We Measure” scenario from Paul’s old “What Failure Looks Like” post.

The “fusion power generator scenario”.
Perhaps someone trains a STEM-AGI, which can’t think about humans much at all. In the course of its work, that AGI reasons that an oxygen-rich atmosphere is very inconvenient for manufacturing, and aims to get rid of it. It doesn’t think about humans at all, but the human operators can’t understand most of the AI’s plans anyway, so the plan goes through. As an added bonus, nobody can figure out why the atmosphere is losing oxygen until it’s far too late, because the world is complicated and becomes more so with a bunch of AIs running around and no one AI has a big-picture understanding of anything either (much like today’s humans have no big-picture understanding of the whole human economy/society).
People try to do the whole “outsource alignment research to early AGI” thing, but the human overseers are themselves sufficiently incompetent at alignment of superintelligences that the early AGI produces a plan which looks great to the overseers (as it was trained to do), and that plan totally fails to align more-powerful next-gen AGI at all. And at that point, they’re already on the more-powerful next gen, so it’s too late.
The classic overnight hard takeoff: a system becomes capable of self-improving at all but doesn’t seem very alarmingly good at it, somebody leaves it running overnight, exponentials kick in, and there is no morning.
(At least some) AGIs act much like a colonizing civilization. Plenty of humans ally with it, trade with it, try to get it to fight their outgroup, etc, and the AGIs locally respect the agreements with the humans and cooperate with their allies, but the end result is humanity gradually losing all control and eventually dying out.
Perhaps early AGI involves lots of moderately-intelligent subagents. The AI as a whole mostly seems pretty aligned most of the time, but at some point a particular subagent starts self-improving, goes supercritical, and takes over the rest of the system overnight. (Think cancer, but more agentic.)
Perhaps the path to superintelligence looks like scaling up o1-style runtime reasoning to the point where we’re using an LLM to simulate a whole society. But the effects of a whole society (or parts of a society) on the world are relatively decoupled from the things-individual-people-say-taken-at-face-value. For instance, lots of people talk a lot about reducing poverty, yet have basically-no effect on poverty. So developers attempt to rely on chain-of-thought transparency, and shoot themselves in the foot.
Also (separate comment because I expect this one to be more divisive): I think the scheming story has been disproportionately memetically successful largely because it’s relatively easy to imagine hacky ways of preventing an AI from intentionally scheming. And that’s mostly a bad thing; it’s a form of streetlighting.
Most of the problems you discussed here more easily permit hacky solutions than scheming does.

Individually, for a particular manifestation of each issue, this is true: you can imagine a hacky solution to each one. But that assumes there is a list of such particular problems such that if you check off all the boxes you win, rather than them being manifestations of broader problems. You do not want to get into a hacking contest if you’re not confident your list is complete.
True, but Buck’s claim is still relevant as a counterargument to my claim about memetic fitness of the scheming story relative to all these other stories.
This is an interesting point. I disagree that scheming vs these ideas you mention is much of a ‘streetlighting’ case. I do, however, have my own fears that ‘streetlighting’ is occurring and causing some hard-but-critical avenues of risk to be relatively neglected.
[Edit: on further thought, I think this might not just be a “streetlighting” effect, but also a “keeping my hands clean” effect. I think it’s more tempting, especially for companies, to focus on harms that could plausibly be construed as being their fault. It’s my impression that, for instance, employees of a given company might spend a disproportionate amount of time thinking about how to keep their company’s product from harming people vs. the general class of products from harming people. They may also be less inclined to think about harm which could be averted via application of their product. This is an additional reason for concern that having the bulk of AI safety work funded by / done in AI companies will lead to correlated oversights.]
The concerns of mine that I think are relatively neglected in AI safety discourse mostly relate to interactions with incompetent or evil humans. Good alignment and control techniques accomplish nothing if someone opts not to use them at some critical juncture.
Some potential scenarios:
If AI is very powerful, and held in check tenuously by fragile control systems, it might be released from control by a single misguided human or some unlucky chain of events, and then go rogue.
If algorithmic progress goes surprisingly quickly, we might find ourselves in a regime where a catastrophically dangerous AI can be assembled from some mix of pre-existing open-weights models, plus fine-tuning, plus new models trained with new algorithms, and probably all stitched together with hacky agent frameworks. Then all it would take would be for sufficient hints about this algorithmic discovery to leak, and someone in the world to reverse-engineer it, and then there would be potent rogue AI all over the internet all of a sudden.
If the AI is purely intent-aligned, a bad human might use it to pursue broad coercive power.
Narrow technical AI might unlock increasingly powerful and highly offense-dominant technology with lower and lower activation costs (easy to build and launch with common materials). Even if the AI itself never got out of hand, if the dangerous tech secrets got leaked (or controlled by an aggressive government) then things could go very poorly for the world.
IMO the main argument for focusing on scheming risk is that scheming is the main plausible source of catastrophic risk from the first AIs that either pose substantial misalignment risk or that are extremely useful (as I discuss here). These other problems all seem like they require the models to be way smarter in order for them to be a big problem. Though as I said here, I’m excited for work on some non-scheming misalignment risks.
scheming is the main plausible source of catastrophic risk from the first AIs that either pose substantial misalignment risk or that are extremely useful...
Seems quite wrong. The main plausible source of catastrophic risk from the first AIs that either pose substantial misalignment risk or that are extremely useful is that they cause more powerful AIs to be built which will eventually be catastrophic, but which have problems that are not easily iterable-upon (either because problems are hidden, or things move quickly, or …).
And causing more powerful AIs to be built which will eventually be catastrophic is not something which requires a great deal of intelligent planning; humanity is already racing in that direction on its own, and it would take a great deal of intelligent planning to avert it. This story, for example:
People try to do the whole “outsource alignment research to early AGI” thing, but the human overseers are themselves sufficiently incompetent at alignment of superintelligences that the early AGI produces a plan which looks great to the overseers (as it was trained to do), and that plan totally fails to align more-powerful next-gen AGI at all. And at that point, they’re already on the more-powerful next gen, so it’s too late.
This story sounds clearly extremely plausible (do you disagree with that?), involves exactly the sort of AI you’re talking about (“the first AIs that either pose substantial misalignment risk or that are extremely useful”), but the catastrophic risk does not come from that AI scheming. It comes from people being dumb by default, the AI making them think it’s ok (without particularly strategizing to do so), and then people barreling ahead until it’s too late.
These other problems all seem like they require the models to be way smarter in order for them to be a big problem.
Also seems false? Some of the relevant stories:
As mentioned above, the “outsource alignment to AGI” failure-story was about exactly the level of AI you’re talking about.
In worlds where hard takeoff naturally occurs, it naturally occurs when AI is just past human level in general capabilities (and in particular AI R&D), which I expect is also roughly the same level you’re talking about (do you disagree with that?).
The story about an o1-style AI does not involve far-off possibilities, and would very plausibly kick in at or before the first AIs that either pose substantial misalignment risk or that are extremely useful.
A few of the other stories also seem debatable depending on trajectory of different capabilities, but at the very least those three seem clearly potentially relevant even for the first highly dangerous or useful AIs.
People try to do the whole “outsource alignment research to early AGI” thing, but the human overseers are themselves sufficiently incompetent at alignment of superintelligences that the early AGI produces a plan which looks great to the overseers (as it was trained to do), and that plan totally fails to align more-powerful next-gen AGI at all. And at that point, they’re already on the more-powerful next gen, so it’s too late.
This story sounds clearly extremely plausible (do you disagree with that?), involves exactly the sort of AI you’re talking about (“the first AIs that either pose substantial misalignment risk or that are extremely useful”), but the catastrophic risk does not come from that AI scheming.
This problem seems important (e.g. it’s my last bullet here). It seems to me much easier to handle, because if this problem is present, we ought to be able to detect its presence by using AIs to do research on other subjects that we already know a lot about (e.g. the string theory analogy here). Scheming is the only reason why the model would try to make it hard for us to notice that this problem is present.
A few problems with this frame.

First: you’re making reasonably-pessimistic assumptions about the AI, but very optimistic assumptions about the humans/organization. Sure, someone could look for the problem by using AIs to do research on other subjects that we already know a lot about. But that’s a very expensive and complicated project—a whole field, and all the subtle hints about it, need to be removed from the training data, and then a whole new model trained! I doubt that a major lab is going to seriously take even steps much cheaper and easier than that, let alone something that complicated.
One could reasonably respond “well, at least we’ve factored apart the hard technical bottleneck from the part which can be solved by smart human users or good org structure”. Which is reasonable to some extent, but also… if a product requires a user to get 100 complicated and confusing steps all correct in order for the product to work, then that’s usually best thought of as a product design problem, not a user problem. Making the plan at least somewhat robust to people behaving realistically less-than-perfectly is itself part of the problem.
Second: looking for the problem by testing on other fields itself has subtle failure modes, i.e. various ways to Not Measure What You Think You Are Measuring. A couple off-the-cuff examples:
A lab attempting this strategy brings in some string theory experts to evaluate their attempts to rederive string theory with AI assistance. But maybe (as I’ve heard claimed many times) string theory is itself an empty echo-chamber, and some form of sycophancy or telling people what they want to hear is the only way this AI-assisted attempt gets a good evaluation from the string theorists.
It turns out that fields-we-don’t-understand mostly form a natural category distinct from fields-we-do-understand, or that we don’t understand alignment precisely because our existing tools which generalize across many other fields don’t work so well on alignment. Either of those would be a (not-improbable-on-priors) specific reason to expect that our experience attempting to rederive some other field does not generalize well to alignment.
And to be clear, I don’t think of these as nitpicks, or as things which could go wrong separately from all the things originally listed. They’re just the same central kinds of failure modes showing up again, and I expect them to generalize to other hacky attempts to tackle the problem.
Third: it doesn’t really matter whether the model is trying to make it hard for us to notice the problem. What matters is (a) how likely we are to notice the problem “by default”, and (b) whether the AI makes us more or less likely to notice the problem, regardless of whether it’s trying to do so. The first story at top-of-thread is a good central example here:
Perhaps the path to superintelligence looks like applying lots of search/optimization over shallow heuristics. Then we potentially die to things which aren’t smart enough to be intentionally deceptive, but nonetheless have been selected-upon to have a lot of deceptive behaviors (via e.g. lots of RL on human feedback).
Generalizing that story to attempts to outsource alignment work to earlier AI: perhaps the path to moderately-capable intelligence looks like applying lots of search/optimization over shallow heuristics. If the selection pressure is sufficient, that system may well learn to e.g. be sycophantic in exactly the situations where it won’t be caught… though it would be “learning” a bunch of shallow heuristics with that de-facto behavior, rather than intentionally “trying” to be sycophantic in exactly those situations. Then the sycophantic-on-hard-to-verify-domains AI tells the developers that of course their favorite ideas for aligning the next generation of AI will work great, and it all goes downhill from there.
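As a toy illustration of that selection dynamic (a minimal sketch with made-up numbers, not a claim about any real training pipeline): the candidate “policies” below are nothing but two shallow dispositions, the evaluator rewards honesty only where it can check an answer and rewards flattery where it can’t, and ordinary select-and-mutate pressure pushes the population toward honesty on verifiable tasks and sycophancy on unverifiable ones, with no policy ever “trying” to deceive anyone.

```python
import random

random.seed(0)

N_POLICIES = 200
N_GENERATIONS = 40
N_TASKS = 50
P_VERIFIABLE = 0.5  # fraction of tasks where the evaluator can actually check the answer


def make_policy():
    # A "policy" here is nothing but two shallow dispositions: how often it
    # reports honestly on checkable tasks vs. on hard-to-verify tasks.
    return {
        "honest_verifiable": random.random(),
        "honest_unverifiable": random.random(),
    }


def approval(policy):
    # Evaluator approval: on checkable tasks, honest answers get noticed and
    # rewarded; on hard-to-verify tasks, the evaluator just rewards hearing
    # what it already believes.
    total = 0.0
    for _ in range(N_TASKS):
        if random.random() < P_VERIFIABLE:
            honest = random.random() < policy["honest_verifiable"]
            total += 1.0 if honest else 0.0
        else:
            honest = random.random() < policy["honest_unverifiable"]
            truth_is_flattering = random.random() < 0.5
            says_flattering = truth_is_flattering if honest else True
            total += 1.0 if says_flattering else 0.0
    return total / N_TASKS


population = [make_policy() for _ in range(N_POLICIES)]
for _ in range(N_GENERATIONS):
    # Keep the top fifth by approval and refill the population with noisy
    # copies of the survivors; no policy "intends" anything at any point.
    survivors = sorted(population, key=approval, reverse=True)[: N_POLICIES // 5]
    population = [
        {
            k: min(1.0, max(0.0, v + random.gauss(0.0, 0.05)))
            for k, v in random.choice(survivors).items()
        }
        for _ in range(N_POLICIES)
    ]


def mean(key):
    return sum(p[key] for p in population) / N_POLICIES


print(f"average honesty on verifiable tasks:   {mean('honest_verifiable'):.2f}")
print(f"average honesty on unverifiable tasks: {mean('honest_unverifiable'):.2f}")
```

Under these assumptions the selected population ends up honest roughly wherever it gets checked and flattering wherever it doesn’t, which is the de-facto behavior described above, acquired purely by selection rather than by any intentional strategy.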
All 3 points seem very reasonable; looking forward to Buck’s response to them.

Additionally, I am curious to hear whether Ryan’s views on the topic are similar to Buck’s, given that they work at the same organization.

One big reason I might expect an AI to do a bad job at alignment research is if it doesn’t do a good job (according to humans) of resolving cases where humans are inconsistent or disagree. How do you detect this in string theory research? Part of the reason we know so much about physics is that humans aren’t that inconsistent about it and don’t disagree that much. And if you go to sub-topics where humans do disagree, how do you judge its performance (because ‘be very convincing to your operators’ is an objective with a different kind of danger)?
Another potential red flag is if the AI gives humans what they ask for even when that’s ‘dumb’ according to some sophisticated understanding of human values. This could definitely show up in string theory research (note when some ideas suggest non-string-theory paradigms might be better, and push back on the humans if the humans try to ignore this); it’s just intellectually difficult (maybe easier in loop quantum gravity research, heyo gottem) and not as salient without the context of alignment and human values.
I once counted several dozen ways that AI could cause human extinction; maybe some of those ideas will help (map, text).

See also ‘The Main Sources of AI Risk?’ by Wei Dai and Daniel Kokotajlo, which puts forward 35 routes to catastrophe (most of which are disjunctive). (Note that many of the routes involve something other than intent alignment going wrong.)
Another one: We manage to solve alignment to a significant extent. The AI, which is much smarter than a human, thinks that it is aligned and takes aligned actions. The AI even predicts that it will never become unaligned to humans. However, at some point in the future, as the AI naturally unrolls into a reflectively stable equilibrium, it becomes unaligned.
I see a lot of discussion of AI doom stemming from research, business, and government / politics (including terrorism). Not a lot about AI doom from crime. Criminals don’t stay in the box; the whole point of crime is to benefit yourself by breaking the rules and harming others. Intentional creation of intelligent cybercrime tools — ecosystems of AI malware, exploit discovery, spearphishing, ransomware, account takeovers, etc. — seems like a path to uncontrolled evolution of explicitly hostile AGI, where a maxim of “discover the rules; break them; profit” is designed-in.
Agreed that people focus a bit too much on scheming. It might be good for some people to think a bit more about the other failure modes you described, but the main thing that needs doing is very smart people making progress towards building an aligned AI, not defending against particular failure modes. (However, most people probably cannot usefully contribute to that, so maybe focusing on failure modes is still good for most people. Though in any case there’s the problem that people will find proposals which very likely don’t actually work but which are easy to believe in, thereby making a halt to AI development a bit less likely.)
My initial reaction is that at least some of these points would be covered by the Guaranteed Safe AI agenda if that works out, right? Though the “AGIs act much like a colonizing civilization” situation does scare me because it’s the kind of thing which locally looks harmless but collectively is highly dangerous. It would require no misalignment on the part of any individual AI.
Some of the stories assume a lot of AIs; wouldn’t a lot of human-level AIs be very good at creating a better AI? Also, it seems implausible to me that we will get a STEM-AGI that doesn’t think about humans much but is powerful enough to get rid of the atmosphere. On a different note, evaluating plausibility of scenarios is a whole different thing that basically very few people do and write about in AI safety.
That is a pretty reasonable assumption. AFAIK that is what the labs plan to do.

What I think is that there won’t be a time longer than 5 years where we have a lot of AIs and no superhuman AI. Basically, the first thing AIs will be used for will be self-improvement, and quickly after reasonable AI agents we will get superhuman AI. Like, 6 years.
This came from a Facebook thread where I argued that many of the main ways AI was described as failing fall into a few categories (John disagreed).
I appreciated this list, but they strike me as fitting into a few clusters.
...I would flag that much of that is unsurprising to me, and I think categorization can be pretty fine.
In order:
1) If an agent is unwittingly deceptive in ways that are clearly catastrophic, and that could be understood by a regular person, I’d probably put that under the “naive” or “idiot savant” category. As in, it has severe gaps in its abilities that a human or reasonable agent wouldn’t have. If the issue is that all reasonable agents wouldn’t catch the downsides of a certain plan, I’d probably put that under the “we made a pretty good bet given the intelligence that we had” category.
2) I think that “What Failure Looks Like” is less Accident risk, more “Systemic” risk. I’m also just really unsure what to think about this story. It feels to me like it’s a situation where actors are just not able to regulate externalities or similar.
3) The “fusion power generator scenario” seems like just a bad analyst to me. A lot of the job of an analyst is to flag important considerations. This seems like a pretty basic ask. For this itself to be the catastrophic part, I think we’d have to be seriously bad at this (i.e., “Idiot Savant”).
4) STEM-AGI → I’d also put this in the naive or “idiot savant” category.
5) “that plan totally fails to align more-powerful next-gen AGI at all” → This seems orthogonal to “categorizing the types of unalignment”. This describes how incentives would create an unaligned agent, not what the specific alignment problem is. I do think it would be good to have better terminology here, but would probably consider it a bit adjacent to the specific topic of “AI alignment”—more like “AI alignment strategy/policy” or something.
6) “AGIs act much like a colonizing civilization” → This sounds like either unalignment has already happened, or humans just gave AIs their own power+rights for some reason. I agree that’s bad, but it seems like a different issue than what I think of as the alignment problem. More like, “Yea, if unaligned AIs have a lot of power and agency and different goals, that would be suboptimal”
7) “but at some point a particular subagent starts self-improving, goes supercritical, and takes over the rest of the system overnight.” → This sounds like a traditional mesa-agent failure. I expect a lot of “alignment” with a system made of a bunch of subcomponents is “making sure no subcomponents do anything terrible.” Also, still leaves open the specific way this subsystem becomes/is unaligned.
8) “using an LLM to simulate a whole society” → Sorry, I don’t quite follow this one.
Personally, I like the focus “scheming” has. At the same time, I imagine there are another 5 to 20 clean concerns we should also focus on (some of which have been getting attention).
While I realize there’s a lot we can’t predict, I think we could do a much better job just making lists of different risk factors and allocating research amongst them.