I explored recursive self-improvement here: the main conclusion is that it is difficult for a boxed AI, because it must solve two mutually exclusive tasks: hiding from humans and significantly changing itself. I also wrote that RSI could happen at many levels, from hardware up to the general principles of thinking.
Therefore, an AI collaborating with humans in its early stages will self-improve more quickly than a boxed one.
Most types of RSI (except learning) are risky for the AI itself: it needs to halt itself during modification, and it faces all the alignment problems all over again with respect to its successor.