Initially, take over a large number of computers via very carefully hidden, recursive computer-security compromise. It seems fairly probable that a post-intelligence-explosion AI could not just take over every noteworthy computer (internet-connected ones quickly via the net, air-gapped ones by thumb drive), but do so while near-perfectly covering its tracks via obscure, near-undetectable bugs in low-level code. And even if some security expert picks it up, that expert will send a message about it over the internet, which the AI can intercept and learn to hide from extremely quickly.
Likely much more important than the vast computational resources it now commands is its control over the world's communications. This gives it a vast amount of data to mine and learn from on human interaction and influence, current groups, politics, technology, etc. From here (assuming no quick path to godhood like nanotech is feasible, or would reliably result in a Good End for the AI) it figures out how best to manipulate human civilisation using its total control of world communication, its ability to gain arbitrary money (i.e. influence) by beating the stock markets and/or hacking banks, and its extreme understanding of how to manipulate human psychology via text/images/video. So long as it plays it a little safe and does nothing too obvious until it has an unassailable power base (e.g. engineer a major war, get both sides building battle robots in huge numbers, and have all aircraft/tanks/serious weapons fitted with an override), it is hard to see how we as a civilisation would figure out what was happening and do anything about it.
Assuming no quick tech takeover is possible, the AI will know that its no. 1 priority is for humans to not realize it's a big deal. It will dedicate huge resources to making sure we don't know what's happening (e.g. taking over most computers with something that looks a lot like a particularly clever human botnet rather than something that screams "AI" if someone finds it) until our knowing is irrelevant. And given how patchy general computer security would look to something that understands code well enough to turn itself into a superintelligence, it's going to succeed. Hiding really well for 20 years while learning, blocking other AIs, subtly preparing real-world resources, and waiting for us to become even more net-dependent is entirely worthwhile if the AI thinks it gains even a slightly better chance of winning the entire future of the universe for its utility function.
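(A back-of-the-envelope way to see why patience dominates, with made-up symbols rather than anything formal: let $p$ be the AI's win probability if it acts now, $\Delta p$ the improvement bought by 20 years of hiding, $U$ the utility of winning the future, and $c$ the utility cost of the delay. Waiting is worthwhile iff

$$(p + \Delta p)\,U - c \;>\; p\,U \quad\Longleftrightarrow\quad \Delta p \;>\; \frac{c}{U},$$

and since $U$, the value of the entire future of the universe under its utility function, is astronomically larger than any cost $c$ of a 20-year delay, even a tiny $\Delta p$ clears the bar.)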
tl;dr: Even if the AI can't immediately win, it can and will hide, and while it's hiding it can learn how to influence us in whatever direction suits it while making itself near-impossible to detect or co-ordinate against. Influencing humans to build/allow AI-controlled factories seems entirely plausible, and it won't play its hand in the open until it knows it can win.
I'd rate a true intelligence explosion not being feasible until the deep future as vastly more likely than the product of an intelligence explosion failing to bypass whatever we try to stop it with, even if all the potential instawin buttons like nanotech, biotech, and perfect human influence don't work for an early AI.