I see where you’re going with these mind-cancers, but I’m not sure that the hypothetical modules you chose lead to an example that makes sense as a whole.
S, I, P, and D together start off super-intelligent, or at least highly intelligent, as a unit, and the system starts trying to optimize its sub-parts.
Debugging your own thoughts on the fly seems like a non-starter, so this AI needs to construct variants of itself and its own modules and test them out in some kind of sandbox to create better versions of its own modules.
But at the start it has functioning S, I, P, and D modules. How would it end up choosing a D version 1.1 with random or semi-random outputs when it’s assessing it with D version 1.0, which does not have random outputs?
What part isn’t solved by taking the approach of “Don’t assess whether a change to yourself is successful using the new version of yourself, decide with the old version”?
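That gatekeeping rule can be made concrete with a toy sketch (all the module names, the deterministic/random behaviors, and the scoring function here are hypothetical stand-ins, not anyone's actual proposal). The candidate D v1.1 is run in a sandbox but scored by D v1.0's own criterion, never by its own judgment:

```python
import random

# Toy stand-in for the D (decision) module: maps a problem to an answer.
def d_v1_0(problem):
    return problem * 2  # deterministic baseline behavior

def d_v1_1(problem):
    return problem * 2 + random.choice([-1, 0, 1])  # semi-random variant

# The OLD version's assessment criterion: v1.0 treats its own
# deterministic rule as the standard a candidate must meet.
def old_assessment(problem, answer):
    return -abs(answer - problem * 2)  # 0 is perfect, negative is worse

def sandbox_score(candidate, problems, assess):
    # Run the candidate on sandboxed test problems, scored by the old
    # module's criterion, not by the candidate's own judgment.
    return sum(assess(p, candidate(p)) for p in problems)

problems = list(range(10))
baseline = sandbox_score(d_v1_0, problems, old_assessment)
trial = sandbox_score(d_v1_1, problems, old_assessment)

# Accept the upgrade only if the old evaluator strictly prefers it.
accept = trial > baseline
```

Under this rule the random-output variant can at best tie the baseline and is never strictly preferred, so v1.0 never promotes it, which is the point of the question above.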
It depends on whether the increase in intelligence comes from inside or outside. Some algorithms might be safe with limited resources but become unstable with more, and this might not be easy to establish, even for the AI.