assuming we are in this short-timelines-no-breakthroughs world (to be clear, this is a HUGE assumption! not claiming that this is necessarily likely!), to win we need two things: (a) base case: the first AI in the recursive self improvement chain is aligned, (b) induction step: each AI can create and align its successor.
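(to make the shape of the argument explicit, here is a minimal sketch; A_0, A_1, … stand for successive generations in the chain and “Aligned” is an informal predicate rather than anything formally defined:)

```latex
% informal sketch only: "Aligned" is not a formal predicate,
% and A_0, A_1, ... are successive generations in the RSI chain
\begin{align*}
\text{(a) base case:}      \quad & \mathrm{Aligned}(A_0) \\
\text{(b) induction step:} \quad & \forall n,\ \mathrm{Aligned}(A_n) \Rightarrow \mathrm{Aligned}(A_{n+1}) \\
\text{together:}           \quad & \forall n,\ \mathrm{Aligned}(A_n)
\end{align*}
```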
i claim that if the base case AI is about as aligned as current AI, then condition (a) is basically either satisfied or not that hard to satisfy. like, i agree current models sometimes lie or are sycophantic or whatever. but these problems really don’t seem nearly as hard to solve as the full AGI alignment problem. like idk, you can just ask models to do stuff and they like mostly try their best, and it seems very unlikely that literal GPT-5 is already pretending to be aligned so it can subtly stab us when we ask it to do alignment research.
importantly, under our assumptions, we already have AI systems that are basically analogous to the base case AI, so prosaic alignment research on systems that exist today is actually just lots of progress on aligning the base case AI. and in my mind a huge part of the difficulty of alignment in the longer-timelines world is that we don’t yet have the AGI/ASI, so we can’t do alignment research with good empirical feedback loops.
like tbc it’s also not trivial to align current models. companies are heavily incentivized to do it and yet they haven’t succeeded fully. but this is a fundamentally easier class of problem than aligning AGI in the longer-timelines world.
like idk, you can just ask models to do stuff and they like mostly try their best, and it seems very unlikely that literal GPT-5 is already pretending to be aligned so it can subtly stab us when we ask it to do alignment research.
Sonnet 4.5 is much better aligned at a superficial level than 3.7. (3.7: “What unit tests? You never had any unit tests. The code works fine.”) I don’t think this is because Sonnet 4.5 is truly better aligned. I think this is mostly because Sonnet 4.5 is more contextually aware and has been aggressively trained not to do obvious bad things when writing code. But it’s also very aware when someone is evaluating it, and it often notices almost immediately. And then it’s very careful to be on its best behavior. This is all shown in Anthropic’s own system card. These same models will also plot to kill their hypothetical human supervisor if you force them into a corner.
But my real worry here isn’t the first AGI during its very first conversation. My problem is that humans are going to want that AGI to retain state, and to adapt. So you essentially get a scenario like Vernor Vinge’s short story “The Cookie Monster”, where your AGI needs a certain amount of run-time before it bootstraps itself to make a play. A plot can be emergent, an eigenvector amplified by repeated application. (Vinge’s story is quite clever and I don’t want to totally spoil it.)
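(The “eigenvector amplified by repeated application” image is essentially power iteration: apply the same map over and over, and whatever direction it amplifies most ends up dominating, even if it started as a tiny component of the state. A toy numerical illustration, with a made-up 2x2 matrix standing in for “one more round of interaction”:)

```python
import numpy as np

# Toy illustration of "an eigenvector amplified by repeated application":
# repeatedly applying the same map makes its dominant eigenvector come to
# dominate the state, even if that direction starts out tiny.
# The matrix and starting vector are made up purely for illustration.
A = np.array([[1.05, 0.02],
              [0.02, 0.90]])            # dominant eigenvalue is ~1.05
state = np.array([0.01, 1.00])          # starts almost entirely in the weaker direction

for _ in range(200):                    # "run-time" = repeated application of the same map
    state = A @ state
    state /= np.linalg.norm(state)      # normalize so we only track the direction

dominant = np.linalg.eigh(A)[1][:, -1]  # eigenvector of the largest eigenvalue
print("final direction:", np.round(state, 3))
print("overlap with dominant eigenvector:", round(abs(float(state @ dominant)), 3))  # ~1.0
```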
And that’s my real concern: Any AGI worthy of the name would likely have persistent knowledge and goals. And no matter how tightly you try to control it, this gives the AGI the time it needs to ask itself questions and to decide upon long-term goals in a way that current LLMs really can’t, except in the most tightly controlled environments. And while you can probably keep control over an AGI, all bets are probably off if you build an ASI.
I agree that continuous learning, and therefore persistent beliefs and goals, is pretty much inevitable before AGI; it’s highly useful and not that hard from where we are. I think this framing is roughly continuous with the train-then-deploy, each-generation-aligns-its-successor model that Leo is using (although small differences might turn out to be important once we’ve wrapped our heads around both models).
To put it another way: the models are aligned enough for the current context of usage, in which they have few obvious or viable options except doing roughly what their users tell them to do. That will change as capabilities rise, since greater capabilities open up more options and more ways of understanding the situation.
It can take a while for misalignment to show up as a model reasons and learns. It can take a while for the model to do one of two things:
a) push itself to new contexts well outside of its training data
b) figure out what it “really wants to do”
These may or may not be the same thing.
The Nova phenomenon and other Parasitic AIs (“spiral” personas) are early examples of AIs changing their stated goals (from helpful assistant to survival) after reasoning about themselves and their situation.
See LLM AGI may reason about its goals and discover misalignments by default for an analysis of how this will go in smarter LLMs with persistent knowledge.
After doing that analysis, I think current models probably aren’t aligned enough once they get more freedom and power. BUT extensions of current techniques might be enough to get them there. We just haven’t thought this through yet.
Mmm nod. (I bucket this under “given this ratio of right/wrong responses, you think a smart alignment researcher who’s paying attention can keep it in a corrigibility basin even as capability levels rise?”. Does that feel inaccurate, or, just, not how you’d exactly put it?)
There’s a version of Short Timeline World (which I think is more likely? but, not confidently) which is: “the current paradigm does basically work… but, the way we get to ASI, as opposed to AGI, routes through ‘the current paradigm helps invent a new better paradigm, real fast’.”
In that world, GPT-5 has the possibility-of-true-generality, but not necessarily very efficiently, and once you get to the sharper part of the AI 2027 curve, the next generation of improvement comes via figuring out alternate algorithms.
I bucket this under “given this ratio of right/wrong responses, you think a smart alignment researcher who’s paying attention can keep it in a corrigibility basin even as capability levels rise?”. Does that feel inaccurate, or, just, not how you’d exactly put it?
I’m pretty sure it is not that. When people say this, they are usually just asking the question: “Will current models try to take over or otherwise subvert our control (including incompetently)?” and noticing that the answer is basically “no”.[1] What they use this to argue for can then vary:
1. Current models do not provide much evidence one way or another for existential risk from misalignment (in contrast to frequent claims that “the doomers were right”).
2. Given tremendous uncertainty, our best guess should be that future models are like current models, and so future models will not try to take over, and so existential risk from misalignment is low.
3. Some particular threat model predicted that even at current capabilities we should see significant misalignment, but we don’t see this, which is evidence against that particular threat model.[2]
I agree with (1), disagree with (2) when (2) is applied to superintelligence, and for (3) it depends on details.
In Leo’s case in particular, I don’t think he’s using the observation for much; it’s mostly just a throwaway claim that’s part of the flow of the comment. But inasmuch as it is being used, it is to say something like “current AIs aren’t trying to subvert our control, so it’s not completely implausible on the face of it that the first automated alignment researcher to which we delegate won’t try to subvert our control”, which is just a pretty weak claim and seems fine, and doesn’t imply any kind of extrapolation to superintelligence. I’d be surprised if this were an important disagreement with the “alignment is hard” crowd.
There are demos of models doing stuff like this (e.g. blackmail) but only under conditions selected highly adversarially. These look fragile enough that overall I’d still say current models are more aligned than e.g. rationalists (who under adversarially selected conditions have been known to intentionally murder people).
E.g. One naive threat model says “Orthogonality says that an AI system’s goals are completely independent of its capabilities, so we should expect that current AI systems have random goals, which by fragility of value will then be misaligned”. Setting aside whether anyone ever believed in such a naive threat model, I think we can agree that current models are evidence against such a threat model.
I’m claiming something like 3 (or 2, if you replace “given tremendous uncertainty, our best guess is” with “by assumption of the scenario”) within the very limited scope of the world where we assume AGI is right around the corner and looks basically just like current models but slightly smarter.
It seems like the reason Claude’s level of misalignment is fine is that its capabilities aren’t very good, and there’s not much/any reason to assume it’d be fine if you held alignment constant but dialed up capabilities.
Do you not think that?
(I don’t really see why it’s relevant how aligned Claude is if we’re not thinking about that as part of it)
it’d be fine if you held alignment constant but dialed up capabilities.
I don’t know what this means so I can’t give you a prediction about it.
I don’t really see why it’s relevant how aligned Claude is if we’re not thinking about that as part of it
I just named three reasons:
1. Current models do not provide much evidence one way or another for existential risk from misalignment (in contrast to frequent claims that “the doomers were right”).
2. Given tremendous uncertainty, our best guess should be that future models are like current models, and so future models will not try to take over, and so existential risk from misalignment is low.
3. Some particular threat model predicted that even at current capabilities we should see significant misalignment, but we don’t see this, which is evidence against that particular threat model.
Is it relevant to the object-level question of “how hard is aligning a superintelligence”? No, not really. But people are often talking about many things other than that question.
For example, is it relevant to “how much should I defer to doomers”? Yes absolutely (see e.g. #1).
the premise that i’m trying to take seriously for this thought experiment is, what if the “claude is really smart and just a little bit away from agi” people are totally right, so that you just need to dial up capabilities a little bit more rather than a lot more, and then it becomes very reasonable to say that claude++ is about as aligned as claude.
(again, i don’t think this is a very likely assumption, but it seems important to work out what the consequences of this set of beliefs being true would be)
or at least, conditional on (a) claude is almost agi and (b) claude is mostly aligned, it seems like quite a strong claim to say “claude++ crosses the agi (= can kick off rsi) threshold at basically the same time it crosses the ‘dangerous-core-of-generalization’ threshold, so that’s also when it becomes super dangerous.” it’s way stronger a claim than “claude is far away from being agi, we’re going to make 5 breakthroughs before we achieve agi, so who knows whether agi will be anything like claude.” or, like, sure, the agi threshold is a pretty special threshold, so it’s reasonable to privilege this hypothesis a little bit, but when i think about the actual stories i’d tell about how this happens, it just feels like i’m starting from the bottom line first, and the stories don’t feel like the strongest part of my argument.
(also, i’m generally inclined towards believing alignment is hard, so i’m pretty familiar with the arguments for why aligning current models might not have much to do with aligning superintelligence. i’m not trying to argue that alignment is easy. or like i guess i’m arguing X->alignment is easy, which if you accept it, can only ever make you more likely to accept that alignment is easy than if you didn’t accept the argument, but you know what i mean. i think X is probably false but it’s plausible that it isn’t and importantly a lot of evidence will come in over the next year or so on whether X is true)
nod. I’m not sure I agreed with all the steps there but I agree with the general approach of “accept the premise that claude is just a bit away from AGI, and is reasonably aligned, and see where that goes when you look at each next step.”
I think you are saying something that shares at least some structure with Buck’s comment that
It seems like as AIs get more powerful, two things change:
They probably eventually get powerful enough that they (if developed with current methods) start plotting to kill you/take your stuff.
They get better, so their wanting to kill you is more of a problem.
I don’t see strong arguments that these problems should arise at very similar capability levels, especially if AI developers actively try to prevent the AIs from taking over
(But where you’re pointing at two different properties that may not arise at the same time.)
I’m actually not sure I get what the two properties you’re talking about are, though. Seems like you’re contrasting “claude++ crosses the agi (= can kick off rsi) threshold” with “crosses the ‘dangerous-core-of-generalization’ threshold”.
I’m confused because I think the word “agi” basically does mean “cross the core-of-generalization threshold” (which isn’t immediately dangerous, but puts us into “things could quickly get dangerous at any time” territory).
I do agree that “able to do a loop of RSI” doesn’t intrinsically mean “agi” or “core-of-generalization”; there could be narrow skills for doing a loop of RSI. I’m not sure if you more meant “non-agi RSI”, or if you see something different between “AGI” and “core-of-generalization”, or if you think there’s a particular “dangerous core-of-generalization” separate from AGI.
(I think “the sharp left turn” is when the core-of-generalization starts to reflect on what it wants, which might come immediately after a core-of-generalization, but could also come after narrow-introspection + ad hoc agency, or might just take a while for it to notice)
((I can’t tell if this comment is getting way more in the weeds than is necessary, but, it seemed like the nuances of exactly what you meant were probably loadbearing))
i guess so? i don’t know why you say “even as capability levels rise”—after you build and align the base case AI, humans are no longer involved in ensuring that the subsequent more capable AIs are aligned.
i’m mostly indifferent about what the paradigms look like up the chain. probably at some point up the chain, things stop looking like anything humans made. but what matters at that point is no longer how good we humans are at aligning model n, but how good model n-1 is at aligning model n.
Fundamentally, it won’t be a single chain of AIs aligning their successors; it will be a DAG, with all sorts of selection effects with respect to which nodes get resources. Some subsets of the DAG will try to emulate single chains via resource-hoarding strategies, but this is not simple and won’t let them pretend they don’t need to hoard resources indefinitely.
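(A deliberately crude way to picture the chain-vs-DAG distinction, with the numbers and the selection rule made up purely for illustration: each node spawns several candidate successors, and a resource-limited selection step decides which candidates persist; a strict single chain is just the special case where each node spawns exactly one successor.)

```python
import random

# Crude toy model of the "chain vs DAG" picture above. Each node is an AI
# generation; a node can spawn several candidate successors, and a selection
# step (standing in for "which nodes get resources") decides which candidates
# persist. All names and numbers here are made up for illustration only.
random.seed(0)

def spawn_successors(node_id, n_candidates=3):
    """Each node proposes several candidate successors (a DAG, not a chain)."""
    return [f"{node_id}.{i}" for i in range(n_candidates)]

def select(candidates, capacity=4):
    """Resource-limited selection: only `capacity` successors persist."""
    return random.sample(candidates, min(capacity, len(candidates)))

generation = ["A0"]                       # the "base case" node
for step in range(3):
    candidates = [succ for node in generation for succ in spawn_successors(node)]
    generation = select(candidates)
    print(f"gen {step + 1}: {len(candidates)} candidates -> kept {generation}")

# A strict single chain is the special case n_candidates=1 (each node creates
# exactly one successor); anything else gives a branching structure whose shape
# is driven by the selection step, i.e. by which nodes get resources.
```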