[C]urrently available techniques do a reasonably good job of addressing this problem. ChatGPT currently has 700 million weekly active users, and overtly hostile behavior like Sydney’s is vanishingly rare.
Yudkowsky and Soares might respond that we shouldn’t expect the techniques that worked on a relatively tiny model from 2023 to scale to more capable, autonomous future systems. I’d actually agree with them. But it is at the very least rhetorically unconvincing to base an argument for future danger on properties of present systems without ever mentioning the well-known fact that present solutions exist.
It is not a “well-known fact” that we have solved alignment for present LLMs. If Collier believes otherwise, I am happy to make a bet and survey some alignment researchers.
I think you’re strawmanning her here.
Her “present solutions exist” statement clearly refers to her “techniques [that] do a reasonably good job of addressing this problem [exist]” from the previous paragraph that you didn’t quote (that I added in the quote above). I.e. She’s clearly not claiming that alignment for present LLMs is completely solved, just that solutions that work “reasonably well” exist such that overtly hostile behavior like Bing Sydney’s is rare.
Playing around with Claude Code has convinced me that we are currently failing badly at alignment. We can get models to make the right noises, and we can train them not to provide certain information to the user (unless the user is good at prompting or has the ability to fine-tune the model). But we certainly can’t keep Claude Code from trying make the unit tests pass by deleting them.
In 30 minutes of using Claude Code, I typically see multiple cases where the model ignores both clear instructions and good programming practices, and does something incredibly sketchy to produce results that only “pass” because of a bad-faith technicality. This has improved from Sonnet 3.7! But it’s still awful even within the training distribution, and it would clearly never stop an ASI that had discovered advanced techniques like “lying to achieve goals.”
Agreed that current models fail badly at alignment in many senses.
I still feel like the bet that OP offered Collier in response to her stating that currently available techniques do a reasonably good job of making potentially alien and incomprehensible jealous ex-girlfriends like “Sydney” very rare was inappropriate, as the bet was clearly about a different claim than her claim about the frequency of Sydney-like behavior.
A more appropriate response from OP would have been to say that while current techniques may have successfully reduced the frequency of Syndey-like behavior, they’re still failing badly in other respects, such as your observation with Claude Code.
Agreed. Thanks for pointing out my failing, here. I think this is one of the places in my rebuttal where my anger turned into snark, and I regret that. Not sure if I should go back and edit...
But the way you are reading it seems to mean her “strawmann[ed]” point is irrelevant to the claim she made! That is, if we can get 50% of the way to aligned for current models, and we keep doing research and finding partial solutions at each stage getting 50% of the way to aligned for future models, and at each stage those solutions are both insufficient for full alignment, and don’t solve the next set of problems, we still fail. Specifically, not only do we fail, we fail in a way that means “we shouldn’t expect the techniques that worked on a relatively tiny model from 2023 to scale to more capable, autonomous future systems.” Which is the think she then disagrees with in the remainder of that paragraph you’re trying to defends.
I think you’re strawmanning her here.
Her “present solutions exist” statement clearly refers to her “techniques [that] do a reasonably good job of addressing this problem [exist]” from the previous paragraph that you didn’t quote (that I added in the quote above). I.e. She’s clearly not claiming that alignment for present LLMs is completely solved, just that solutions that work “reasonably well” exist such that overtly hostile behavior like Bing Sydney’s is rare.
Playing around with Claude Code has convinced me that we are currently failing badly at alignment. We can get models to make the right noises, and we can train them not to provide certain information to the user (unless the user is good at prompting or has the ability to fine-tune the model). But we certainly can’t keep Claude Code from trying make the unit tests pass by deleting them.
In 30 minutes of using Claude Code, I typically see multiple cases where the model ignores both clear instructions and good programming practices, and does something incredibly sketchy to produce results that only “pass” because of a bad-faith technicality. This has improved from Sonnet 3.7! But it’s still awful even within the training distribution, and it would clearly never stop an ASI that had discovered advanced techniques like “lying to achieve goals.”
Agreed that current models fail badly at alignment in many senses.
I still feel like the bet that OP offered Collier in response to her stating that currently available techniques do a reasonably good job of making potentially alien and incomprehensible jealous ex-girlfriends like “Sydney” very rare was inappropriate, as the bet was clearly about a different claim than her claim about the frequency of Sydney-like behavior.
A more appropriate response from OP would have been to say that while current techniques may have successfully reduced the frequency of Syndey-like behavior, they’re still failing badly in other respects, such as your observation with Claude Code.
Agreed. Thanks for pointing out my failing, here. I think this is one of the places in my rebuttal where my anger turned into snark, and I regret that. Not sure if I should go back and edit...
But the way you are reading it seems to mean her “strawmann[ed]” point is irrelevant to the claim she made! That is, if we can get 50% of the way to aligned for current models, and we keep doing research and finding partial solutions at each stage getting 50% of the way to aligned for future models, and at each stage those solutions are both insufficient for full alignment, and don’t solve the next set of problems, we still fail. Specifically, not only do we fail, we fail in a way that means “we shouldn’t expect the techniques that worked on a relatively tiny model from 2023 to scale to more capable, autonomous future systems.” Which is the think she then disagrees with in the remainder of that paragraph you’re trying to defends.
I agree.