Assume you were to gradually transform Google Maps into a seed AI, at what point would it become an existential risk and how?
If it tries to self-improve, and as a side effect turns the universe to computronium.
If it gains general intelligence, and as part of trying to provide better search results, it realizes that self-modification could yield much faster search results.
This whole idea of a harmless general intelligence is just imagining a general intelligence that is not general enough to be dangerous; one that will be able to think generally, and yet somehow this ability will always reliably stop before thinking something that might end badly.
Thanks, I completely missed that. Explains a lot.