I don’t really get the argument that ASI would naturally choose to isolate itself without consuming any of the resources humanity requires. Will there be resources ASI can use that humanity can’t? Sure, I assume so. Is it possible ASI will have access to energy, matter, and computational resources so much better that it isn’t worth its time to take the stuff humans want? I can imagine that, but I don’t know how likely it is, and in particular I don’t know why I would expect humans to survive the transitional period while a maturing ASI figures all that out. It seems at least as likely to me that ASI blots out the sun across the planet for a year or ten to increase its computing power, and that this extra compute is what eventually lets it learn it doesn’t need to destroy any other biospheres to get what it wants.
And if I do take this argument seriously, it seems to suggest that humanity will, at best, not benefit from building ASI; that if we do build it, ASI leaving us alone is contingent on ensuring we don’t build more ASI later; that ensuring that means making sure we don’t have AGI capable of self-improving into ASI; and thus that we shouldn’t build AGI at all, because it’ll get taken away shortly thereafter and not help us much either. Would you agree with that?
You’re right on both counts.
On transitional risks: The separation equilibrium describes a potential end state, not the path to it. The transition would be extremely dangerous. While a proto-AGI might recognize this equilibrium as optimal during development (potentially reducing some risks), an emerging ASI could still harm humans while determining its resource needs or pursuing instrumental goals. Nothing guarantees safe passage through this phase.
On building ASI: There is indeed no practical benefit to deliberately creating ASI that outweighs the risks. If separation is the natural equilibrium:
Best case: We keep useful AGI tools below self-improvement thresholds
Middle case: ASI emerges but separates without destroying us
Worst case: Extinction during transition
This framework suggests avoiding ASI development entirely is optimal. If separation is inevitable, we gain minimal benefits while facing enormous transitional risks.
To expand: the reason this thesis is important nonetheless is that I don’t believe the best-case scenario is likely, or compatible with the way things currently are. Accidentally creating ASI is almost guaranteed to happen at some point. As such, the biggest points of investment should be:
Surviving the transitional period
Establishing mechanisms for negotiation in an equilibrium state