Graphing AI economic growth rates, or time to Dyson Swarm
BAU GWP is business-as-usual gross world product (the global equivalent of GDP).
Acknowledgements: Thanks to Robin Hanson, Anders Sandberg, and others for input on the lines. Errors are my own.
I graphed out a rough approximation[1] of the growth rates implied by a few leading AI figures' scenarios to aid comparison (dotted lines mean I wasn't able to get the person's endorsement). The legend is ordered by peak growth rate. I was struck by how many have mentioned roughly monthly doubling times. At first I assumed a monthly doubling time would mean about 12 times the annual growth rate of a yearly doubling time, but it's actually about 4000 times, since the economy would grow by a factor of 2^12 ≈ 4096 per year rather than by a factor of 2. With a few assumptions,[2] I estimated that a Dyson Swarm[3] corresponds to an economy about 10^19 times larger than today's. The self-replicating nanotechnology scenario could have a doubling time of a day or less, but I think it would be difficult to build a full Dyson Swarm at that rate, so I used a one-week doubling time, which gives about 15 months to a Dyson Swarm. A one-year doubling time roughly corresponds to a factory making its own weight in equipment per year (clanking replicators), the current energy payback time of solar panels, and the old Moore's Law. I've also added lines from economists, based on consultations with the authors of this and this.
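As a rough sanity check on those numbers, here is a short sketch (the doubling times and the ~10^19 size target are taken as given from the text above, not re-derived):

```python
import math

# Annual growth factor implied by a given doubling time (in years)
def annual_growth_factor(doubling_time_years):
    return 2 ** (1 / doubling_time_years)

yearly = annual_growth_factor(1.0)       # 2x per year
monthly = annual_growth_factor(1 / 12)   # 2^12 = 4096x per year

# Compare annual growth *rates* (fractional increase per year):
# ~4095 vs 1, i.e. roughly 4000 times higher, not 12 times.
print((monthly - 1) / (yearly - 1))      # ~4095

# Doublings needed to reach a ~10^19x larger economy, and the time
# that takes at one doubling per week.
doublings = math.log2(1e19)              # ~63.1
print(doublings * 7 / 30.44)             # ~14.5, i.e. about 15 months
```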
The relevance to AI safety is that I think there is some negative correlation between safety and the rate of change (or perhaps its higher derivatives, like acceleration or jerk?). Interestingly, people tend to think The Age of Em would be safer even though its economic growth and especially its jerk are high, but that's because ems are human emulations. I am also interested in people's opinions on how much safety we would get by going up one of these curves for a little while before getting truly explosive growth (e.g. superintelligence; some discussion is here).
- ^
Generally the median scenario, though Hanson has significant probability mass on population and economic collapse, so this is his median scenario conditional on getting ems.
- ^
Digital mind speedup of 1 million, a digital-mind-to-human power requirement ratio of 100, ignoring economic growth that could occur without increased energy consumption, and ignoring Baumol's cost disease (one way these assumptions could combine to give ~10^19 is sketched after the footnotes).
- ^
Even diamond would not be nearly strong enough to support a solid Dyson Sphere, so it would likely instead be a swarm of independently orbiting satellites.
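Expanding on footnote 2, here is one hedged sketch of how those assumptions could yield an economy roughly 10^19 times larger than today's, treating labor in human-speed-equivalents as the proxy for economy size. The solar output (~3.8e26 W), per-human power (~100 W), and current population (~8e9) are additional assumptions of mine, not figures from the post, so treat the result as order-of-magnitude only.

```python
# Assumed inputs not stated in the post:
SOLAR_OUTPUT_W = 3.8e26      # total power a full Dyson Swarm could capture
HUMAN_POWER_W = 100          # rough metabolic power per human
CURRENT_POPULATION = 8e9     # humans today

# Assumptions stated in footnote 2:
SPEEDUP = 1e6                # digital mind speedup over a human
POWER_RATIO = 100            # digital mind power / human power

digital_minds = SOLAR_OUTPUT_W / (HUMAN_POWER_W * POWER_RATIO)  # ~3.8e22 minds
human_equivalents = digital_minds * SPEEDUP                     # ~3.8e28
print(human_equivalents / CURRENT_POPULATION)                   # ~5e18, roughly 10^19
```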
The title and opening paragraph are interesting in their own right, but the AI Safety comment seems like a category error to me. Economic disruption can be a source of major risk, but is not necessary for it.
There are (OOM) 1e10 humans weighing 1e9 tons in total, and killing us all could take just some well-designed and well-dispersed bacteria weighing on the order of grams. Control over large amounts of matter is not needed if the control is sufficiently precise and directed at a sufficiently narrow goal. (I know my phrasing refers to an actively malicious goal, but that's just an easy one to describe; malice is not actually required for the general point.)
As for ems—safer is not safe in absolute terms. If you give human minds great power, then even if they retain prosocial values, safety depends on them only taking actions whose near- and long-term consequences they understand. Our track record on that as a species is not so great.
To the last question on economic growth enabling safety, I would also say "By default, none" and "category error." Economic growth in the abstract is IMO like buying optionality: we collectively have more spare resources to throw at whatever goals. But what kinds of options get opened up, and for whom, depends on the nature and source of the growth.

Imagine we snapped our fingers and made a Dyson swarm instantly. What would we do with all the energy? Imagine we also had starlifting and cheap transmutation of elements. What would we do with the matter? Actually using it would (at least presently) involve constantly firing TW-scale energy beams and megaton-scale lumps of metal at our own home planet. Unclear effects on safety at best, highly contingent on us solving some very difficult collective decision-making, coordination, and control problems.

I do think a future humanity, having successfully solved such problems, would have a better chance of also successfully coordinating to not build AGI prematurely, and also a less pressing need for the economic growth from racing ahead to do so. But on the other hand, we're talking then about a world where matter and energy are superabundant, which means value creation is pretty much limited only by labor and knowledge, the things hardest to automate or accelerate without AI. And the ability to build the compute needed to achieve AGI is no longer gated behind the ability to raise huge amounts of money: it's trivial for such a world to build even a yottawatt-scale data center in orbit and bypass a lot of our practical need for thinking carefully about what experiments to run and who can run them. And if they succeeded, the resulting AI would start out already not bound to our planet. Which, on the one hand, means less need for harvesting our local resources to bootstrap, but, on the other hand, also means no dependence on our continued survival.
I think you make a number of good points, and just being able to move more energy and matter probably does not make us safer.
Yes, solving aging before ASI would dramatically reduce the urgency that many people feel for racing to ASI. But the likely increase in anthropogenic X risks would dwarf the benefit of being able to prevent natural X risks.
I think if the ASI(s) arose in a data center in orbit, there are some scenarios where that could be beneficial, such as if there were AI-AI conflict. Regardless, I think it would pretty quickly become independent of humans for its survival.
I think Paul Christiano's argument for continuous progress being safer is that society will be significantly less vulnerable to ASI because it will have built up defenses ahead of time. That makes sense to me if progress is continuous all the way. But my intuition is that even if progress is not continuous all the way, getting some continuous development first (e.g. a few orders of magnitude of AI-driven economic growth) would probably mean more sophisticated AI defenses, which would give us a little more protection against ASI.