A non-self-modifying AI wouldn’t have any of the above problems. It would, of course, have some new problems. If it encounters a bug in itself, it won’t be able to fix itself (though it may be able to report the bug). The only way it would be able to increase its own intelligence is by improving the data it operates on. If the “data it operates on” includes a database of useful reasoning methods, then I don’t see how this would be a problem in practice.
The problem is that it would probably be overtaken by, and then left behind by, all-machine self-improving systems. If a system is safe but loses control over its own future, its safety becomes a worthless feature.
So you believe that a non-self-improving AI could not go foom?
The short answer is “yes”—though this is more a matter of the definition of the terms than a “belief”.
In theory, you could have System A improving System B which improves System C which improves System A. No individual system is “self-improving” (though there’s a good case for the whole composite system counting as being “self-improving”).
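The A→B→C→A loop can be made concrete with a toy model. Nothing here is from the original discussion: the numeric "capability" scores, the 10%-of-improver's-capability upgrade rule, and the function name `run_cycle` are all illustrative assumptions. The point is just that no system ever modifies itself, yet every system's capability grows because gains compound around the ring.

```python
# Toy model of a cyclic improvement loop (illustrative assumptions only):
# system i upgrades system (i + 1) % n each round, so no system is
# self-modifying, but the composite system still improves.

def run_cycle(capabilities, rounds):
    """Run the A -> B -> C -> A loop for a number of rounds.

    The upgrade an improver can deliver is assumed to scale with its
    own current capability, so improvements compound across the loop.
    """
    caps = list(capabilities)
    n = len(caps)
    for _ in range(rounds):
        for i in range(n):
            target = (i + 1) % n
            # Improver i raises its target by 10% of its own capability.
            caps[target] += 0.1 * caps[i]
    return caps

# Three systems start at equal capability; after ten rounds each has
# more than doubled, despite none having touched itself.
after = run_cycle([1.0, 1.0, 1.0], rounds=10)
assert all(c > 2.0 for c in after)
```

Whether the composite counts as "self-improving" is then just a question of where you draw the system boundary: each component only ever edits its neighbor, but the three-node loop as a whole clearly increases its own capability.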
I guess I feel like the entire concept is too nebulous to really discuss meaningfully.