When an AGI recursively self-improves, is it improving just its software, or is it improving the hardware too? Is it acquiring more hardware (e.g. by creating a botnet on the internet)? Is it making algorithmic improvements? Which improvements are responsible for the biggest order-of-magnitude increases in the AI’s total power?
Any or all of the above. I do expect better software to be the first, and probably the most important, step. There are lots of possible scenarios, but the most dangerous seem to be those where an AI can greatly improve its own software to make much better use of existing hardware.
We are almost certainly not using anywhere near the best possible algorithms for turning computing power into intelligent behaviour. We are probably off by many orders of magnitude, in the sense that the same things could be achieved on the same hardware with vastly less computation.
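As a toy illustration of the point (my own example, not from the original post): the same hardware, given a better algorithm, can produce the same answer with orders of magnitude less work, and the gap grows with problem size.

```python
from functools import lru_cache

# Count how many times each function body actually executes.
calls = {"naive": 0, "memo": 0}

def fib_naive(n):
    """Exponential time: recomputes every subproblem from scratch."""
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Linear time: each subproblem is computed once, then cached."""
    calls["memo"] += 1
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Same hardware, same answer, vastly different amounts of work.
assert fib_naive(20) == fib_memo(20) == 6765
print(calls)  # the naive version does ~1000x more work at n=20
```

At n=20 the naive version already does about three orders of magnitude more work; the ratio grows exponentially with n. The analogy is loose, but it shows why "same hardware, better algorithm" can silently buy enormous capability.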
This is an area where intelligence only slightly beyond the best human capability might enable enormous yet silent advances, with very few physical constraints on how fast the transition can proceed. For some improvements it may not even be meaningful to talk about “orders of magnitude”: a new type of design might achieve things that were impossible with the previous structure no matter how much extra compute we threw at it.
It also doesn’t have to be self-improvement. That’s just one of the stories that’s easier to explain. A narrowly superintelligent tool AI that devises for the humans a better way to design AIs could end up just as disastrous. Likewise a weak agent-like superintelligence that doesn’t have self-preservation as a major goal, but is fine with designing a strong ASI that will supersede it.
Once an ASI is well past the capability of any human, what it can do is by definition not knowable to us. For an agent-like system with instrumental self-preservation, removing its dependence upon existing hardware seems very likely; there are many paths that even humans can devise that would achieve that. This step would probably be slower, but again that isn’t knowable to us.
Creating more and better hardware also seems an obvious step, as we almost certainly have not designed the best possible hardware either. What form the better hardware takes is also not knowable, but there are many candidates that we know about and certainly others that we don’t. We do know that even with existing types of computing hardware we are nowhere near the physical limits on total computing capability, just the economic ones. An extra ten orders of magnitude in computing capability seems like a reasonable lower bound on what could be achieved.
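A rough back-of-envelope sketch of where that “ten orders of magnitude” intuition can come from (my own arithmetic, not from the original post): compare the energy efficiency of a current top supercomputer against the Landauer limit, the thermodynamic minimum energy to erase one bit at room temperature. The supercomputer figures below are assumed approximations (roughly Frontier, 2022), and a FLOP is of course many bit operations, so this is only a crude bound.

```python
import math

# Landauer limit: minimum energy per irreversible bit operation at ~300 K.
k_B = 1.380649e-23                            # Boltzmann constant, J/K
T = 300.0                                     # room temperature, K
landauer_j_per_bit = k_B * T * math.log(2)    # ~2.9e-21 J

# Assumed figures, roughly the Frontier supercomputer (2022):
flops = 1.1e18     # ~1.1 exaFLOP/s sustained
watts = 2.1e7      # ~21 MW power draw

ops_per_joule_today = flops / watts              # ~5e10 FLOP/J
ops_per_joule_limit = 1.0 / landauer_j_per_bit   # ~3.5e20 bit-ops/J

gap = ops_per_joule_limit / ops_per_joule_today
print(f"headroom: roughly {math.log10(gap):.0f} orders of magnitude per joule")
```

Even ignoring that a floating-point operation costs far more than one bit erasure, the per-joule headroom between today’s hardware and the room-temperature thermodynamic floor comes out around ten orders of magnitude, which is why that figure is a plausible lower bound rather than an exotic claim.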