Control theory, I think, tends to assume you are dealing with continuous variables, and I think the relevant properties of AIs are likely not continuous in practice: even if the underlying implementation uses continuous math, RSI will make finite, discrete changes, and even small changes could cause large differences in results.
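For intuition on why discreteness matters, here's a minimal toy sketch (not a model of RSI itself, just the logistic map, a stock example of a discrete system with sensitive dependence): finite update steps can amplify a tiny difference into a completely different outcome.

```python
def iterate_map(x0: float, r: float = 3.9, n: int = 30) -> float:
    """Iterate the logistic map, the discrete update rule x -> r*x*(1-x)."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)  # a finite, discrete change at each step
    return x

# Two starting points differing by one part in a million end up far apart
# after 30 discrete steps; smooth continuous-variable reasoning misses this.
print(iterate_map(0.500000))
print(iterate_map(0.500001))
```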
Also, the dynamics here are likely to depend on capability thresholds, which could make trend extrapolation highly misleading.
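As a toy illustration of the extrapolation problem (all numbers invented for the example): suppose capability grows slowly below some threshold and much faster above it. A trend fitted to pre-threshold data then badly underestimates post-threshold values.

```python
def capability(t: float, threshold: float = 10.0) -> float:
    """Hypothetical capability curve: slow growth below the threshold,
    much faster growth above it (purely illustrative numbers)."""
    if t < threshold:
        return 0.5 * t
    return 0.5 * threshold + 5.0 * (t - threshold)

# Extrapolating the pre-threshold slope (0.5 per unit time) out to t = 15:
extrapolated = 0.5 * 15   # 7.5
actual = capability(15)   # 30.0, about 4x the extrapolated value
print(extrapolated, actual)
```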
Also, note that RSI could create a feedback loop that enhances agency, including agency directed toward nonaligned goals (an agentic AI convergently wants to enhance its own agency).
Also, beware that increases in agency may cause increases in apparent capability because of Agency Overhang.