This article provides object-level arguments for thinking that deceptive alignment is very unlikely.
Recently, some organizations (Redwood Research, Anthropic) have been focusing on AI control in general and on avoiding deceptive alignment in particular. I would like to see future work from these organizations explaining why deceptive alignment is likely enough to justify spending considerable resources on it.
Overall, while I don’t agree that deceptive alignment is <1% likely, this article made me update towards deceptive alignment being somewhat less likely.
I think the article does a good job of arguing that deceptive alignment is unlikely given certain assumptions, but those assumptions may not hold, in which case the conclusion doesn't go through. For example, the alignment faking paper shows that deceptive alignment is possible in a scenario where the base goal has shifted (from helpful & harmless to helpful-only). This article basically assumes we won't do that.
I'm now thinking that this article is more useful if you read it as a set of instructions rather than a set of assumptions. I don't know whether we will change the base goal of TAI between training episodes, but given this article and the alignment faking paper, I hope we won't. It might also be a good idea, for example, to check that the model understands the base goal well before introducing goal-directedness.