Not developing nanotech is analogous to not advancing semiconductor fabs: a compute governance intervention. If ASI really is dangerous and too hard to manage in the foreseeable future, and many reasonable people notice this, then early AGIs will notice it too and seek to prevent an overly fast takeoff.
If there is no software-only singularity and much more compute really is necessary for ASI, not developing nanotech sounds like a useful thing to do. Gradual disempowerment dynamics might make a world largely run by AGIs more coordinated, so that technological determinism loses much of its force: what actually happens would be decided rather than following inevitably from what's feasible. At that point it's not enough to ask what's technologically feasible.