RSI might suggest a need for alignment (between the steps of its recursion), but reaching superintelligence doesn't necessarily require that kind of RSI. Evolution built humans. A world-champion AlphaZero can be obtained by scaling up a tiny, barely competent AlphaZero. The humans at an AI company might take many steps toward superintelligence without understanding what they are doing. A technically competent early AGI that protests against working on RSI because it's obviously dangerous can be finetuned to stop protesting and proceed with building the next machine.