While alignment of superintelligent AI is probably unsolvable, or at least not provably solvable, AI safety as such is solvable: just prevent the creation of any advanced AI, and you get some form of AI safety. However, preventing AGI creation itself requires some forms of AI; even if one wanted to target AI labs with nukes, the missiles would still need guidance systems.
In other words, we could suppose that there is a level of AI development which is sufficient to stop AGI development but not enough to create AGI-related risks. I call it a "Narrow AI Nanny".
Similar ideas were expressed by Roman Yampolskiy, who wrote about "artificial stupidity" for low-impact AI; by Goertzel, who wrote about an AI Nanny; and by Drexler, who wrote about comprehensive AI services as an alternative to AGI.