I think the main issue with this frame is that it underplays AI advantages over biological humans, in particular speed of thought. If AGIs think 20-100 times faster than humans, they can do at least 20-100 years’ worth of research in a single year (given the agency to do so). With algorithmic improvements made in that time, this can likely become at least a 1000x speedup. With specialized hardware, which they might develop within the next year (which for them is 1000 subjective years long), they could become 10,000 or 1,000,000 times faster, and this hardware might be feasible to manufacture immediately, using macroscopic biotech (or diamondoid nanotech if that turns out to be a real option, but it’s not necessary). Now we have epochs’ worth of research in the next year.
Crucially, this doesn’t require any assumptions about the feasibility of superintelligence, just the observation that human brains are glacially slow. All it takes is a single lapse in the human monopoly on agency for a couple of years, anywhere in the world. And even that is unnecessary in a world that remains under human control, but where faster AI hardware is developed before a faster human substrate.
So even in a world where fast takeoff or smarter-than-human AGI is disallowed by the laws of nature, there is still a window of merely 1-4 years from the first AGI (one capable of making unbounded research progress on its own) to an AI civilization that is 1,000,000 years old.
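The 1-4 year window follows from simple arithmetic on subjective time. A minimal sketch of that arithmetic, using the illustrative figures from the paragraphs above (the phase boundaries and the 100x/1000x/1,000,000x speedups are assumptions for illustration, not predictions):

```python
def subjective_years(phases):
    """Sum subjective research-years over (physical_years, speedup) phases."""
    return sum(years * speedup for years, speedup in phases)

# Year 1: baseline AGI at ~100x human speed (upper end of the 20-100x range).
# Year 2: ~1000x after algorithmic improvements found during year 1.
# Years 3-4: ~1,000,000x on specialized hardware.
phases = [(1, 100), (1, 1_000), (2, 1_000_000)]

print(f"{subjective_years(phases):,} subjective years in 4 physical years")
# → 2,001,100 subjective years in 4 physical years
```

Almost all of the subjective time accrues in the last phase, which is why the hardware step dominates the outcome even if the earlier speedups are modest.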
For humans to maintain a monopoly on agency, no such lapse can be allowed before humans can comparably speed themselves up. Slow takeoff is only slow while AIs are incapable of autonomous unbounded research (even if merely because they lack agency, not intelligence), or after some humans have moved to a substrate of similar speed (though this only helps in the absence of strong superintelligence, whose advantage is not speed alone).
Mostly agree, but I’d point out that when assessing the size and speed of scientific progress, one should compare AGI against all of humanity, not only against individual humans.
The speedup I’m talking about is serial, and for the purposes of scientific progress maybe only about 100,000-1,000,000 humans are relevant; possibly just 10,000 would do if all researchers are at von Neumann’s level. This maps to hardware for inference of that many AGI instances in parallel, which seems quite feasible if an AGI instance doesn’t need much more than an LLM. Learning doesn’t need to worry about latency, so it’s a weaker constraint than inference. (This is an exploratory engineering sketch, so everything here is a plausible lower bound, not a prediction.)
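One way to see how instance count and serial speedup combine: instances multiply breadth (parallel research threads), while speedup multiplies depth (subjective time per thread). A toy calculation with the figures above (10,000 researcher-level instances and a 100x serial speedup, both illustrative):

```python
def researcher_years_per_year(instances, serial_speedup):
    # Each parallel instance contributes serial_speedup researcher-years
    # per physical year of wall-clock time.
    return instances * serial_speedup

# 10,000 von-Neumann-level instances at 100x serial speed:
print(f"{researcher_years_per_year(10_000, 100):,} researcher-years per year")
# → 1,000,000 researcher-years per year
```

By this accounting, even the low end of the instance count matches the serial output of the entire relevant human research population every physical year.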
The main bottleneck is the capability of existing fabs, but once it’s overcome, manufacturing scale yields to the same method. The initial speed advantage should let AGIs figure out how to overcome it very quickly, possibly with the intermediate step of designing better chips for existing fabs to widen the advantage. Traditionally, the proposed method for overcoming the hardware/industry bottleneck is nanotech, but if that’s not feasible there is also macroscopic biotech: designing animal-like objects that grow exponentially, as quickly as fruit flies, and serve as non-precision parts of factories and as chemical plants, obviating the need to scale infrastructure to manufacture things like robot arms or buildings. Then it’s a question of making use of all this to produce compute and fusion, which is the step that could take up most of the physical time.
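The appeal of exponentially growing bio-manufacturing is how few doublings it takes to reach industrial scale. A back-of-envelope sketch (the 1 kg seed, the weekly doubling time, and the 10^9 kg target are all assumptions for illustration, not figures from the comment above):

```python
import math

def doublings_to_reach(start_mass_kg, target_mass_kg):
    """Number of doublings to grow from start mass to at least target mass."""
    return math.ceil(math.log2(target_mass_kg / start_mass_kg))

# From a 1 kg seed to a billion kg of industrial biomass:
n = doublings_to_reach(1, 1e9)
print(n)  # → 30 doublings; at one doubling per week, roughly 7 months
```

The point of the sketch is that the timeline is dominated by the doubling period, not the target scale: multiplying the target by another factor of 1000 adds only ~10 more doublings.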