What mechanism do you see for ASI being likely to destroy itself? Intuitively, I’d expect an ASI to be able to avoid any suicide pact technologies/actions, both because it’ll have better judgment than humans and because it (presumably) won’t face the competitive pressures between groups of humans that incentivize rushing risky technologies. Unless I’m missing such a mechanism, it strikes me that, if ASI happens, it probably colonizes the galaxy and beyond. (Hence never reaching ASI being a way to resolve the Fermi Paradox.)
The most likely one is, I think, via research breakthroughs in fundamental physics. For example, I would expect a true understanding of quantum gravity to open the way to much more powerful weapons of mass destruction (on the level of temporarily modifying the properties of significant volumes of space-time so as to destroy the structure those volumes contain), and also to experiments with unexpected large-scale side effects (think of the discussion before the Trinity test about the possibility of “igniting the atmosphere”, but this time for real).
For this particular branch of existential risk scenarios, I would expect and hope that smarter-than-human entities would be able to create social structures capable of avoiding warfare of this kind (although who knows with certainty), but whether they avoid stumbling into dangerous experiments by accident might depend on unknown details of our actual physics and cosmology.
There are other possibilities. For example, I crave a working solution to the “hard problem of consciousness”, a solution with experimentally observable empirical consequences, but when I ponder the implications of the capabilities such a solution is likely to confer, I feel quite uneasy…
We are in a temporary period of “relative stagnation in the most fundamental parts of science”. A drastic increase in the available levels of intelligence is one way to end this stagnation, and I certainly crave that end, but that’s where some very fundamental sources of existential risk lie…