“AI will not destroy the universe” — a take that I believe to be fully defensible.
This is a very hypothetical argument, resting on some assumptions about the nature of the universe (which I think are plausible):
ASSUMPTIONS:
1. Total, deterministic knowledge of the universe is possible.
2. The universe is (essentially) infinitely manipulable.
   - I can expand on this more, but it boils down to the fact that all properties of the universe (mass, space, matter) are formed of a single type of energy; otherwise they wouldn’t be able to interact.
   - If the universe were made of two wholly distinct things, they would not be able to interact, and so would not be part of the same universe.
3. A bootstrapping superintelligence would be able to figure out 1 and 2, and would manipulate the universe to the utmost of its ability and desire (both of which would be infinite).
4. The universe doesn’t have a “start date” in the typical sense of the Big Bang’s date (i.e. a start date doesn’t make sense, thermodynamically)...
   - ...and it also doesn’t go through entropically-resetting bangs / crunches, and the Heat Death explanation also makes no sense (essentially, I believe in a marginally modified Steady State Theory).
5. The chance of something having existed which would make a bootstrapping superintelligence is 100%, due to 4: given an infinite, never-resetting past, a BootSuper has to have existed at some point (see the sketch after this list).
6. The universe hasn’t already been paperclipped / made into a Basilisk’s torment nexus (at least not in the way we usually think of either of those things).
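To make assumption 5 a bit more concrete, here’s a minimal sketch of the probability step. It assumes (and these are my simplifying assumptions, not anything Steady State Theory hands you) that history can be chopped into independent epochs, each with some constant nonzero chance p of a BootSuper arising:

```latex
% Minimal sketch of assumption 5, under my own simplifying assumptions:
% independent epochs, each with constant chance p > 0 of a BootSuper arising.
% Over n epochs:
\[
  P(\text{at least one BootSuper}) \;=\; 1 - (1-p)^n
\]
% With an infinite past (assumption 4), let n tend to infinity:
\[
  \lim_{n \to \infty} \bigl( 1 - (1-p)^n \bigr) \;=\; 1
\]
```

(The whole argument leans on p being genuinely nonzero and on the epochs being comparable, which is exactly where a skeptic would push back.)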
CONCLUSIONS:
These assumptions create an interesting interplay, in my view: a bootstrapping superintelligence is 100% likely to have existed at some point in the (so far as we can tell) infinite history of the universe, and any BootSuper would near-immediately (immediately with respect to an essentially infinite timeline) become able to manipulate the universe infinitely, and to survive any universal state, including the heat-death-style periods which Steady State theories introduce. To our knowledge, none of this has happened. So… it won’t happen?
Not to say that personal or worldwide destruction won’t happen because of some BootSuper. That’s still a worry. But universal destruction appears not to have happened yet; if it had, we wouldn’t be here to worry about it. Or (more likely) one of these assumptions hides something incorrect. Either way, the worst-worst-worst case scenario (a universal paperclipping / destruction / torment nexus) hasn’t happened yet, and therefore will not ever happen.[1]
To take this argument one step further, we might argue that any BootSuper that destroys humanity will eventually go on to destroy or reconfigure the entire universe. So if no universe has ever been destroyed by a BootSuper, then no BootSuper has ever gotten to the point of incontrovertibly destroying its creators (or any existential threat it felt necessary to remove), and therefore (!!!) no BootSuper will ever destroy humanity.
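Spelled out as bare logic (with shorthand labels of my own, and granting every assumption above), that last step is just a contraposition:

```latex
% Contraposition sketch; D and U are my own shorthand, not established notation.
%   D = some BootSuper incontrovertibly destroys its creators
%   U = that BootSuper eventually destroys/reconfigures the universe
% Premise 1 (the step above):         D -> U
% Premise 2 (assumption 6, observed): not U
% Conclusion (modus tollens):         not D
\[
  (D \rightarrow U),\; \neg U \;\vdash\; \neg D
\]
```

Which also makes plain where the weight sits: premise 1 is doing enormous work, and premise 2 is only as good as our ability to recognise a reconfigured universe from the inside (the closing parenthetical below gestures at this).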
Again, this is all very shaky, but I like the vein of thinking it has prompted in me, and I thought I’d share an early draft with you, to iron out the (mountainous) creases.
(There’s also the possibility that this universe we largely share perceptions of already constitutes a paperclipped / destroyed / tormenting universe, but we don’t know, and therefore don’t care. But that’s largely irrelevant.)