I think the article is good at arguing that deceptive alignment is unlikely given certain assumptions, but those assumptions may not hold, in which case the conclusion doesn’t go through. E.g., the alignment faking paper shows that deceptive alignment is possible in a scenario where the base goal has shifted (from helpful & harmless to helpful-only). This article basically assumes we won’t do that.
I’m now thinking that this article is more useful if you read it as a set of instructions rather than a set of assumptions. I don’t know whether we will change the base goal of TAI between training episodes, but given this article and the alignment faking paper, I hope we won’t. It might also be a good idea to check for a solid understanding of the base goal before introducing goal-directedness, for example.