When I’ve tried to talk to alignment pollyannists about the “leap of death” / “failure under load” / “first critical try”, their first rejoinder is usually to deny that any such thing exists, because we can test in advance; they are denying the basic leap of required OOD generalization from failure-is-observable systems to failure-kills-the-observer systems.
You are now arguing that we will be able to cross this leap of generalization successfully. Well, great! If you are at least allowing me to introduce the concept of that difficulty and reply by claiming you will successfully address it, that is further than I usually get. It has so many different attempted names because every name I try to give it gets strawmanned and denied as a reasonable topic of discussion.
As for why your attempt at generalization fails, even assuming gradualism and distribution: Let’s say that two dozen things change between the regimes for observable-failure vs failure-kills-observer. Half of those changes (12) have natural earlier echoes that your keen eyes observed. Half of what’s left (6) is something that your keen wit managed to imagine in advance and that you forcibly materialized on purpose by going looking for it. Of the clever solutions you invented and tested within the survivable regime, 2/3rds of them survive the 6 changes you didn’t see coming, 1/3rd fail. Now you’re dead. The end. If there were only one change ahead, and only one problem you were gonna face, maybe your one solution to that one problem would generalize, but this is not how real life works.
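A minimal sketch of that arithmetic (an editor's illustration using the numbers from the comment, under the conjunctive assumption that every solution is load-bearing, the reading a later reply in this thread pushes back on):

```python
# Editor's sketch of the scenario's bookkeeping; assumes every solution is
# necessary and each must survive the unforeseen changes.

total_changes = 24                               # "two dozen things change"
echoed = total_changes // 2                      # 12 seen early via natural echoes
foreseen = (total_changes - echoed) // 2         # 6 imagined and materialized on purpose
unforeseen = total_changes - echoed - foreseen   # 6 changes nobody saw coming
print(f"changes: {echoed} echoed, {foreseen} foreseen, {unforeseen} unforeseen")

survival_rate = 2 / 3   # fraction of tested solutions that survive the surprises

# If every one of n solutions is necessary, the chance that all of them
# generalize across the unforeseen changes shrinks geometrically.
for n in (1, 3, 6, 12):
    print(f"n={n:>2} necessary solutions -> P(all survive) ~ {survival_rate ** n:.3f}")
```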
And then of course that whole scenario where everybody keenly went looking for all possible problems early, found all the ones they could envision, and humanity did not proceed further until reasonable-sounding solutions had been found and thoroughly tested, is itself taking place inside an impossible pollyanna society that is just obviously not the society we currently find ourselves in.
But it is impossible to convince pollyannists of this, I have found. And also: if alignment pollyannists could produce a great solution, given a couple more years to test their brilliant solutions with coverage of all the problems they had with wisdom foreseen and manifested early, then that societal scenario could maybe be purchased at a lower price than the price of a worldwide shutdown of ASI. That is: the pollyannist technical view being true, but not their social view, might imply a different optimal policy.
But I think the world we live in is one where it’s moot whether Anthropic will get two extra years to test out all their ideas about superintelligence in the greatly different failure-is-observable regime, before their ideas have to save us in the failure-kills-the-observer regime. I think they could not do it either way. I doubt that even 2/3rds of their brilliant solutions derived from the failure-is-observable regime would generalize correctly under the first critical load in the failure-kills-the-observer regime; and even 2/3rds would not be enough. It’s not the sort of thing human beings succeed in doing in real life.
Here’s my attempt to put your point in my words, such that I endorse it:
Philosophy hats on. What is the difference between a situation where you have to get it right on the first try, and a situation in which you can test in advance? In both cases you’ll be able to glean evidence from things that have happened in the past, including past tests. The difference is that in a situation worthy of the descriptor “you can test in advance,” the differences between the test environment and the high-stakes environment are unimportant. E.g. if a new model car is crash-tested a bunch, that’s considered strong evidence about the real-world safety of the car, because the real-world cars are basically exact copies of the crash-test cars. They probably aren’t literally exact copies, and moreover the crash test environment is somewhat different from real crashes, but still. In satellite design, the situation is more fraught—you can test every component in a vacuum chamber, for example, but even then there’s still gravity to contend with. Also what about the different kinds of radiation and so forth that will be encountered in the void of space? Also, what about the mere passage of time—it’s entirely plausible that e.g. some component will break down after two years, or that an edge case will come up in the code after four years. So… operate an exact copy of the satellite in a vacuum chamber bombarded by various kinds of radiation for four years? That would be close but still not a perfect test. But maybe it’s good enough in practice… most of the time. (Many satellites do in fact fail, though also, many succeed on the first try.)
Anyhow, now we ask: Does preventing ASI takeover involve any succeed-on-the-first-try situations?
We answer: Yes, because unlike basically every other technology or artifact, the ASI will be aware of whether it is faced with a genuine opportunity to take over or not. It’s like, imagine if your satellite had “Test mode” and “Launch mode” with significantly different codebases and a switch on the outside that determined which mode it was in, and for some reason you were legally obligated to only test it in Test Mode and only launch it in Launch Mode. It would be a nightmare, you’d be like “OK we think we ironed out all the bugs… in Test Mode. Still have no idea what’ll happen when it switches to Real Mode, but hopefully enough of the code is similar enough that it’ll still work… smh...”
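A sketch of that "Test mode / Launch mode" analogy (editor's illustration with hypothetical names; nobody builds satellites this way on purpose). The point is that when the artifact itself branches on whether it is being tested, pre-launch testing only ever exercises one of the two code paths:

```python
from enum import Enum


class Mode(Enum):
    TEST = "test"      # the only mode you are allowed to run before launch
    LAUNCH = "launch"  # the only mode that ever runs when it matters


def run_satellite(mode: Mode) -> str:
    if mode is Mode.TEST:
        # This branch gets all the iteration, debugging, and validation.
        return "test-mode behaviour: exhaustively exercised"
    # This branch shares some code with TEST, but is never executed before launch.
    return "launch-mode behaviour: first run on the critical try"


print(run_satellite(Mode.TEST))    # all pre-launch evidence comes from here
print(run_satellite(Mode.LAUNCH))  # in the analogy, this is the first critical try
```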
A valid counterargument to this would be “Ah, but we can construct extremely accurate honeypots / testing environments that simulate a real-world opportunity to take over, and then see what the ASI does.” Valid, but not sound, because we probably can’t actually do that.
Another valid counterargument to this would be “Before there is an opportunity to take over the whole world with high probability, there will be an opportunity to take over the world with low probability, such as 1%, and an AI system risk-seeking enough to go for it. And this will be enough to solve the problem, because something something it’ll keep happening and let us iterate until we get a system that doesn’t take the 1% chance despite being risk averse...” ok yeah maybe this one is worse.
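One way to cash out why iterating on such chances is uncomfortable (editor's sketch, under the labelled assumption that the attempts are independent and each really does carry a 1% chance of success): the risk compounds across iterations.

```python
# Cumulative probability that at least one "low-probability" takeover attempt
# succeeds, assuming independent attempts at 1% each.

p_per_attempt = 0.01

for attempts in (1, 10, 50, 100):
    p_any_success = 1 - (1 - p_per_attempt) ** attempts
    print(f"{attempts:>3} risk-seeking attempts -> "
          f"P(at least one succeeds) ~ {p_any_success:.1%}")
```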
Responding more directly to Buck’s comment, I disagree with this part:
If the capability level at which AIs start wanting to kill you is way lower than the capability level at which they are way better than you at everything, then, before AIs are dangerous, you have the opportunity to empirically investigate the phenomenon of AIs wanting to kill you. For example, you can try out your ideas for how to make them not want to kill you, and then observe whether those worked or not. If they’re way worse than you at stuff, you have a pretty good chance at figuring out when they’re trying to kill you.
...unless we lean into the “way” part of “way lower.” But then I’d say there is a different important distribution shift, namely, the shift from AIs which are way lower capability, to the AIs which are high capability.
“Ah, but we can construct extremely accurate honeypots / testing environments that simulate a real-world opportunity to take over, and then see what the ASI does.” Valid, but not sound, because we probably can’t actually do that.
I also think it’s important that you can do this with AIs weaker than the ASI, and iterate on alignment in that context.
But then I’d say there is a different important distribution shift, namely, the shift from AIs which are way lower capability, to the AIs which are high capability.
As with Eliezer, I think it’s important to clarify which capability you’re talking about; I think Eliezer’s argument totally conflates different capabilities.
I’m sure people have said all kinds of dumb things to you on this topic. I’m definitely not trying to defend the position of your dumbest interlocutor.
You are now arguing that we will be able to cross this leap of generalization successfully.
That’s not really my core point.
My core point is that “you need safety mechanisms to work in situations where X is true, but you can only test them in situations where X is false” isn’t on its own a strong argument; you need to talk about features of X in particular.
I think you are trying to set X to “The AIs are capable of taking over.”
There’s a version of this that I totally agree with. For example, if you are giving your AIs increasing amounts of power over time, I think it is foolish to assume that just because they haven’t acted against you while they don’t have the affordances required to grab power, they won’t act against you when they do have those affordances.
The main reason why that scenario is scary is that the AIs might be acting adversarially against you, such that whether you observe a problem is extremely closely related to whether they will succeed at a takeover.
If the AIs aren’t acting adversarially towards you, I think there is much less of a reason to particularly think that things will go wrong at that point.
So the situation is much better if we can be confident that the AIs are not acting adversarially towards us at that point. This is what I would like to achieve.
So I’d say the proposal is more like “cause that leap of generalization to not be a particularly scary one” than “make that leap of generalization in the scary way”.
Re your last paragraph: I don’t really see why you think two dozen things would change between these regimes. Machine learning doesn’t normally have lots of massive discontinuities of the type you’re describing.
Do you expect “The AIs are capable of taking over” to happen a long time after “The AIs are smarter than humanity”, which is a long time after “The AIs are smarter than any individual human”, which is a long time after “AIs recursively self-improve”, and for all of those other things to happen nicely comfortably within a regime of failure-is-observable-and-doesn’t-kill-you, where at any given time only one thing is breaking and all other problems are currently fixed?
No, I definitely don’t expect any of this to happen comfortably or for only one thing to be breaking at once.
When I’ve tried to talk to alignment pollyannists about the “leap of death” / “failure under load” / “first critical try”, their first rejoinder is usually to deny that any such thing exists, because we can test in advance; they are denying the basic leap of required OOD generalization from failure-is-observable systems to failure-kills-the-observer systems.
I’m sure that some people have that rejoinder. I think more thoughtful people generally understand this point fine. [1] A few examples other than Buck:
Paul:
Eliezer often equivocates between “you have to get alignment right on the first ‘critical’ try” and “you can’t learn anything about alignment from experimentation and failures before the critical try.” This distinction is very important, and I agree with the former but disagree with the latter.
Rohin (in the comments of Paul’s post):
I agree with almost all of this, in the sense that if you gave me these claims without telling me where they came from, I’d have actively agreed with the claims. [Followed by some exceptions that don’t include the “first critical try” thing.]
Joe Carlsmith grants “first critical try” as one of the core difficulties in How might we solve the alignment problem:
Generalization with no room for mistakes: you can’t safely test on the scenarios you actually care about (i.e., ones where the AI has a genuine takeover option), so your approach needs to generalize well to such scenarios on the first critical try (and the second, the third, etc).
He also talks about it more in-depth in On first critical tries in AI alignment.
Also Holden on the King Lear problem (and other problems) here.
TBC, I wouldn’t describe any of these people as “alignment pollyannists”, but I think they all have lower p(AI takeover) than Buck, so if you’re treating him as one then I guess you must think these count too.
[1] To argue against an idea honestly, you should argue against the best arguments of the strongest advocates. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.
If this comes as a surprise, then I think you’ve been arguing with the wrong people.
Of the clever solutions you invented and tested within the survivable regime, 2/3rds of them survive the 6 changes you didn’t see coming, 1/3rd fail. Now you’re dead.
It seems unreasonable to conclude we’re now dead, if 2/3rds of our solutions survived the 6 changes we didn’t see coming.
Ideally, the success of a single solution should be a sufficient condition for overall success, rather than every solution being a necessary one. (Whether this holds depends on the nature of the “solutions”. Consider a simple “monitors for bad thoughts” model. If even a single monitor flags bad thoughts, we can instantly pull the plug and evaluate. A malicious AI has to bypass every single monitor to execute malice. So a single monitor that works consistently and reliably ends up being a sufficient condition for preventing malice overall.)
If you’re doing this right, your solutions should have a lot of redundancy and uncorrelated failure modes. 2/3rds of them working should ideally be plenty.
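A sketch of that redundancy point (editor's illustration with the thread's 2/3 figure, assuming failures are independent and uncorrelated): contrast the conjunctive case, where any one failing solution is fatal, with the redundant-monitor case, where malice must slip past every monitor at once.

```python
p_holds = 2 / 3          # "2/3rds of them survive the 6 changes you didn't see coming"
p_fails = 1 - p_holds


def p_conjunctive(n: int) -> float:
    """Every one of n solutions is necessary; all must hold."""
    return p_holds ** n


def p_redundant(n: int) -> float:
    """Any one of n monitors catching the problem is enough."""
    return 1 - p_fails ** n


for n in (1, 3, 6):
    print(f"n={n}: conjunctive P(ok) ~ {p_conjunctive(n):.3f}, "
          f"redundant P(ok) ~ {p_redundant(n):.3f}")
```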
[Edit: I notice people disagreevoting this. I’m very interested to learn why you disagree, either in this comment thread or via private message.]
Let’s say that two dozen things change between the regimes for observable-failure vs failure-kills-observer.
What are some examples of the sorts of “things that change” that I should be imagining changing here?
“We can catch the AI when it’s alignment faking”?
“The AI can’t develop nanotech”?
“The incentives of the overseeing AI preclude collusion with its charge.”?
Things like those? Or is this missing a bunch?
It’s not obvious to me why we should expect that there are two dozen things that change all at once when the AI is in the regime where if it tried, it could succeed at killing you.
If capability gains are very fast in calendar time, then sure, I expect a bunch of things to change all at once, at least relative to our ability to measure them. But if, as in this branch of the conversation, we’re assuming gradualism, then I would generally expect factors like the above, at least, to change one at a time. [1]
One class of things that might change all at once is “Is the expected value of joining an AI coup better than the alternatives?” for each individual AI, which could change in a cascade (or in a simultaneous moment of agents reasoning with Logical Decision Theory). But I don’t get the sense that’s the sort of thing that you’re thinking about.
All of that, yes, alongside things like, “The AI is smarter than any individual human”, “The AIs are smarter than humanity”, “the frontier models are written by the previous generation of frontier models”, “the AI can get a bunch of stuff that wasn’t an option accessible to it during the previous training regime”, etc etc etc.