Mostly I agree with the logic of this post, though it often describes human behaviors carried out with the strength of an AI. It's hard for me to imagine a superintelligent, immortal agent with no finite self caring about hoarding resources or exerting influence over animals far down the evolutionary ladder. To a superintelligence, wouldn't actively wiping out humanity in the blink of an eye be no different from simply waiting for us to die out naturally in some arbitrary number of years? Is the assumption that the superintelligence would see humanity as a future threat?
Even so, I think this post is useful in exploring the types of AIs we'll see in the future and the ways they could be misused. For supernerds like me, it's refreshing to see common sci-fi staples like the misuse of genetic engineering and AI discussed in a serious, realistic way.