I would expect OpenAI leadership to change their mind on these questions given clear enough evidence to the contrary.
Why do you expect this? What sorts of evidence do you expect would change their minds? And what do you suppose they think of arguments about inner alignment, orthogonality, deceptive alignment, FOOM, and the sharp left turn?