I think you’re confusing “sharp left turn” with “treacherous turn”. The sharp left turn is a hypothesized scenario where rapid capability gain happens without generalization of alignment properties, and a treacherous turn is “a hypothetical event where an advanced AI system which has been pretending to be aligned due to its relative weakness turns on humanity once it achieves sufficient power that it can pursue its true objective without risk.”
You’re right! Corrected, thanks :)