Suppose you could build an AGI in 1999 or 2009, but the AGI required a specialized, expensive supercomputer to run, and there were only one or two such supercomputers in the world. Also suppose (for the sake of argument) that the AGI couldn’t create a botnet of itself out of PCs or conventional servers, or that creating such a botnet wouldn’t significantly improve the AGI’s abilities (<2x improvement). Would that be a better outcome than an AGI that arrives in 2029 and can run on anywhere from dozens to billions of the machines that exist at that time?
Maybe?
Not having a hardware overhang makes your planet much safer. But it depends on how quickly researchers would develop methods for scaling AGI systems, either by building more supercomputers or by generalizing the code to run on more conventional machines. If this process takes years or decades, we get to experiment with AGI relatively safely. But if it takes only months, then I think the world ends around 2000 or 2010 (depending on the AGI arrival date).