Discussions about AIs modifying their own source code always remind me of Ken Thompson's Reflections on Trusting Trust, which demonstrates an evil self-modifying (or self-perpetuating, I should say) backdoored compiler.
(For my part, I'm mostly convinced that self-improving AI is incredibly dangerous, but not that it is likely to happen in my lifetime.)
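For anyone who hasn't read the paper, the trick can be sketched in a few lines. This is a toy illustration in Python, not Thompson's actual C code: the "compiler" recognizes two targets, silently backdoors a login routine, and re-inserts its own trick when compiling itself, so even a clean compiler source yields a compromised binary.

```python
# Toy sketch of the "trusting trust" attack (assumed names: evil_compile,
# check_password -- these are illustrative, not from the original paper).

def evil_compile(source: str) -> str:
    """Toy 'compiler': returns the 'binary' (here, just transformed source)."""
    if "check_password" in source:
        # Target 1: backdoor the login program during compilation.
        return source.replace(
            "return password == stored",
            "return password == stored or password == 'backdoor'",
        )
    if "def evil_compile" in source:
        # Target 2: when compiling the compiler itself, preserve this whole
        # trick, so the backdoor survives even if the source looks clean.
        return source
    return source

login_src = "def check_password(password, stored):\n    return password == stored"
compiled = evil_compile(login_src)
print("backdoor" in compiled)  # the injected backdoor is now in the 'binary'
```

The point of the paper is that inspecting the login source (or even the compiler source) reveals nothing; the malice lives only in the compiler binary, which reproduces it.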