So you believe that a non-self-improving AI could not go foom?
The short answer is “yes”—though this is more a matter of the definition of the terms than a “belief”.
In theory, you could have System A improving System B, which improves System C, which improves System A. No individual system is “self-improving” (though there’s a good case for the composite system as a whole counting as “self-improving”).
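To make the cycle concrete, here’s a toy sketch of what I mean (my own illustration, not a model of any real system; the “capability” numbers and the improvement rule are made up): three systems where each one improves the next in the loop and none ever modifies itself.

```python
# Toy sketch: a cycle of three systems, A -> B -> C -> A.
# Each system improves the next one; no system touches itself.
# "Capability" is an invented scalar standing in for how good a system is.

def improve(improver_capability: float, target_capability: float) -> float:
    """Return the target's new capability after the improver works on it.

    Assumption: the gain scales with the improver's capability,
    so better improvers produce bigger jumps in their targets.
    """
    return target_capability + 0.1 * improver_capability

a, b, c = 1.0, 1.0, 1.0  # starting capabilities of Systems A, B, C

for step in range(10):
    b = improve(a, b)  # System A improves System B
    c = improve(b, c)  # System B improves System C
    a = improve(c, a)  # System C improves System A

    # No individual system ever modified itself, yet the total keeps
    # rising: the loop as a whole behaves like a self-improving system.
    print(f"step {step}: A={a:.2f} B={b:.2f} C={c:.2f} total={a + b + c:.2f}")
```

Under these (arbitrary) assumptions the composite’s capability grows every iteration even though each individual system is only ever improved from the outside, which is the sense in which the whole loop plausibly counts as “self-improving”.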
I guess I feel like the entire concept is too nebulous to really discuss meaningfully.