The title and opening paragraph are interesting in their own right, but the AI Safety comment seems like a category error to me. Economic disruption can be a source of major risk, but is not necessary for it.
There are (OOM) 1e10 humans weighing a total of roughly 1e9 tons, and killing us all could take just some well-designed and well-dispersed bacteria weighing on the order of grams. Control over large amounts of matter is not needed if the control is sufficiently precise and directed at a sufficiently narrow goal. (I know my phrasing refers to an actively malicious goal, but that’s just an easy one to describe; malice is not actually required for the general point.)
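(A quick back-of-envelope on those figures, just to show the orders of magnitude line up; the ~8e9 headcount and ~70 kg average body mass are my own assumptions, not from the post:)

```python
# Back-of-envelope check of the human biomass figure.
# Assumptions: ~8e9 people (OOM 1e10), ~70 kg average body mass.
n_humans = 8e9
avg_mass_kg = 70
total_tonnes = n_humans * avg_mass_kg / 1000  # kg -> metric tons
print(f"{total_tonnes:.1e} tonnes")  # ~5.6e8 tonnes, i.e. OOM 1e9 tons
```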
As for ems—safer is not safe in absolute terms. If you give human minds great power, then even if they retain prosocial values, safety depends on them only taking actions whose near- and long-term consequences they understand. Our track record on that as a species is not so great.
To the last question on economic growth enabling safety—I would also say “By default, none” and “category error.” Economic growth in the abstract is IMO like buying optionality—we collectively have more spare resources to throw at whatever goals we choose. But what kinds of options get opened up, and for whom, depends on the nature and source of the growth. Imagine we snapped our fingers and made a Dyson swarm instantly. What would we do with all the energy? Imagine we also had starlifting and cheap transmutation of elements. What would we do with the matter? Actually using it would (at least presently) involve constantly firing TW-scale energy beams and megaton-scale lumps of metal at our own home planet. The effects on safety would be unclear at best, and highly contingent on us solving some very difficult collective decision-making, coordination, and control problems.

I do think a future humanity, having successfully solved such problems, would have a better chance of also successfully coordinating to not build AGI prematurely, and also less pressing need for the economic growth from racing ahead to do so.

But on the other hand, we’re talking then about a world where matter and energy are super abundant, which means value creation is pretty much limited only by labor and knowledge—the things hardest to automate/accelerate without AI. And the ability to build the compute needed to achieve AGI is no longer gated behind the ability to raise huge amounts of money—it’s trivial for such a world to build even a yottawatt-scale data center in orbit and bypass a lot of our practical need for thinking carefully about what experiments to run and who can run them. And if they succeeded, the resulting AI would start out already not bound to our planet. Which, on the one hand, means less need to harvest our local resources to bootstrap, but, on the other hand, also means no dependence on our continued survival.
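(To put a rough number on “yottawatt-scale,” my own back-of-envelope assuming the Sun’s total output of roughly 3.8e26 W: a 1e24 W facility would need only a fraction of a percent of what a full Dyson swarm could in principle capture.)

```python
# Rough scale comparison: a 1 YW data center vs. total solar output.
# Assumption: solar luminosity ~3.8e26 W (standard textbook value).
solar_luminosity_w = 3.8e26
yottawatt_w = 1e24
fraction = yottawatt_w / solar_luminosity_w
print(f"A 1 YW facility uses ~{fraction:.2%} of the Sun's output")  # ~0.26%
```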
I think you make a number of good points, and just being able to move more energy and matter probably does not make us safer.
I do think a future humanity, having successfully solved such problems, would have a better chance of also successfully coordinating to not build AGI prematurely, and also less pressing need for the economic growth from racing ahead to do so.
Yes, solving aging before ASI would dramatically reduce the urgency that many people feel for racing to ASI. But the likely increase in anthropogenic X risks would dwarf the benefit of being able to prevent natural X risks.
I think if the ASI(s) arose in a data center in orbit, there are some scenarios in which that could be beneficial, such as an AI-AI conflict. Regardless, I think it would pretty quickly become independent of humans for its survival.
I think Paul Christiano’s argument that continuous progress is safer is that society will be significantly less vulnerable to ASI because it will have built up defenses ahead of time. That makes sense if progress is continuous all the way. But my intuition is that even if we get only some continuous development (e.g., a few orders of magnitude of AI-driven economic growth), that probably means more sophisticated AI defenses, which would give us a little more protection against ASI.