It has only been about one human generation since human cloning became technologically feasible. The fact that we have not developed it after only one generation tells us relatively little about humanity’s ability to resist technologies that provide immediate and large competitive advantages.
Human cloning enables essentially millions of Grothendiecks and von Neumanns, which is likely an immense advantage. Delaying ASI by one human generation (for a start) might actually be a very useful development. So this snippet probably doesn’t intend to give an example analogous to ASI.
From later in the post:
The upside of automating all jobs in the economy will likely far exceed the costs, making it desirable to accelerate, rather than delay, the inevitable.
The point of delaying ASI is that it might allow humanity to change crucial details of the outcome of its development. Even granting the premise that ASI is somehow inevitable, it leads to different consequences depending on how it’s developed, which plausibly depends on when it’s developed, even if that is only a single human generation later than otherwise. So the relevant costs aren’t the costs of developing ASI at all, but the relative costs of developing it earlier, when we know less about how to do so correctly, compared to developing it later.
But if “automating all jobs in the economy” is just a mundane technology that merely threatens the current structure of society, in which most people have jobs (so that most of the costs are about the resulting societal upheaval), this snippet makes more sense. If the AI economy remains under humanity’s control, there is much less path dependence in how the introduction of this technology shapes the outcome, and so it matters less for the eventual outcome whether this happens sooner or later.
I think they centrally don’t treat “automating all jobs in the economy” as an ASI precursor, or as something that threatens human extinction or permanent disempowerment.