Two years later, GPT7 comes up with superhumanly-convincing safety measures XYZ. These inadequate standards become the dominant safety paradigm. At this point, if you try to publish “belief propagation”, it gets drowned out in the noise anyway.
Some relatively short time later, there are no humans.
I think that, if there are no humans, then slop must not be too bad. AIs that produce incoherent, superficially-appealing slop are not successfully accomplishing ambitious nontrivial goals, right?
Maybe “some relatively short time later” was confusing. I mean long enough for the development cycle to churn a couple more times.
I.e., GPT7 convinces people of sloppy safety measures XYZ, people implement XYZ and continue scaling up AGI, and the scaled-up superintelligence is a schemer.
(Or maybe you’re treating it as a “capabilities elicitation” issue? Like, the AI knows all sorts of things, but when we ask, we get sycophantic slop answers? But then we should just say that the AI is mediocre in effect. Even if there’s secretly a super-powerful AI hidden inside, who cares? Unless the AI starts scheming, but I thought AI scheming was out-of-scope for this post.)
I do somewhat think of this as a capabilities elicitation issue. I think current training methods are eliciting convincingness, sycophancy, and motivated cognition (for some unknown combination of obvious and not-so-obvious reasons).
But, as clarified above, the idea isn’t that sloppy AI is hiding a super-powerful AI inside. It’s more about convincingness outpacing truthfulness. I think that is a well-established trend. I think many people expect “reasoning models” to reverse that trend. My experience so far suggests otherwise.
I would have said “More powerful AI (if aligned) helps everybody make fewer mistakes. Less powerful AI convinces lots of people to make more mistakes.” Right?
What I’m saying is that “aligned” isn’t the most precise concept to apply here. If scheming is the dominant concern, yes. If not, then the more precise concept seems closer to the “coherence” idea I’m trying to gesture at.
I’ve watched (over Discord) a developer get excited about a supposed full-stack AI development tool that builds a whole application for you from a prompt. They tried a few simple examples and exclaimed that it was like magic. Then, over the course of a few more hours, they issued progressive updates of “I’m a little less excited now”, until they had updated to a very low level of excitement and decided that it seems like magic mainly because it has been optimized to work well for the sorts of simple examples developers might try first when putting it through its paces.
I’m basically extrapolating that sort of thing forward, to cases where you only realize something was bad after months or years instead of hours. As development of these tools continues, they’ll start to succeed at impressing people on the timescale of days and weeks. A big assumption of my model is that, to do that, they don’t need to fundamentally solve the bad-at-extrapolation problem (hallucinations, etc.); they can instead do it in a way that Goodharts on the sorts of feedback they’re getting.
Alignment is broad enough that I can understand classifying this sort of failure as an “alignment failure”, but I don’t think that is the most precise description.
If the AI is producing slop, then why is there a self-improvement dial? Why wouldn’t its self-improvement ideas be things that sound good but don’t actually work, just as its safety ideas are?
This does seem possible, but I don’t find it probable. Self-improvement ideas can be rapidly tested for their immediate impacts, but checking their long-term impacts is harder. Therefore, AI slop can generate many non-working self-improvements that just get discarded, and that’s fine; it’s the apparently-working self-improvement ideas that cause problems down the line. Similarly, the AI itself can more easily train on the short-term impacts of proposed improvements, so it might have a lot less slop when reasoning about those short-term impacts, due to getting that feedback.
(Notice how I am avoiding phrasing it like “the sloppy AI can be good at capabilities but bad at alignment because capabilities are easier to train on than alignment, due to better feedback”. Instead, focusing on short-term impacts vs long-term impacts seems to carve closer to the joints of reality.)
Sloppy AIs are nonetheless fluent with respect to existing knowledge, or things we can get good-quality feedback for, but they have trouble extrapolating correctly. Your scenario, where the sloppy AI can’t help with self-improvement of any kind, suggests a world where there is no low-hanging fruit from applying existing ideas to improve the AI, or from applying the kinds of skills that can be developed with good feedback. This seems possible but not especially plausible.
But if we do have early transformative AI assistants, then the default expectation is that they will fail to solve the ASI alignment problem until it’s too late. Maybe those AIs will fail to solve the problem by outputting convincing-but-wrong slop, or maybe they’ll fail to solve it by outputting “I don’t know”, or maybe they’ll fail to solve it by being misaligned, a.k.a. a failure of “capabilities elicitation”. Who cares? What matters is that they fail to solve it. Because people (and/or the early transformative AI assistants) will build ASI anyway.
I think this is a significant point wrt my position, which depends to some extent on the claim that it is much better for early TAI to say “I don’t know” than to output convincing slop. If leading AI labs are so bullish that they don’t care whether their own AI thinks it is safe to proceed, then I agree that sharing almost any capability-relevant insights with those labs is a bad idea.