I think you make a number of good points, and just being able to move more energy and matter probably does not make us safer.
I do think a future humanity, having solved such problems, would have a better chance of coordinating not to build AGI prematurely, and would feel less pressing need for the economic growth that racing ahead would bring.
Yes, solving aging before ASI would dramatically reduce the urgency many people feel to race toward ASI. But the likely increase in anthropogenic x-risks would dwarf the benefit of being able to prevent natural x-risks.
I think if the ASI(s) arose in a datacenter in orbit, there are some scenarios in which that could be beneficial, such as an AI-AI conflict. Regardless, I think it would pretty quickly cease to depend on humans for survival.
I think Paul Christiano’s argument for continuous progress being safer is that society will be significantly less vulnerable to ASI because it will have built up defenses ahead of time. That makes sense if progress is continuous all the way. But my intuition is that even if we get only some continuous development (e.g. a few orders of magnitude of AI-driven economic growth), that probably means more sophisticated AI defenses, which would give us a little more protection against ASI.