I expect the line to blur between introspective and extrospective RSI. For example, you could imagine AIs trained for interp doing interp on themselves, directly interpreting their own activations/internals and then making modifications while running.
I also write about this at the very end: I do think we will eventually get RSI, though this might come relatively late.