I know you’re not arguing here that using our atoms for something else is the only (or most likely?) reason for a superintelligence to harm us. But just in case some reader gets this impression, here’s another reason:
If the superintelligence's utility function is not perfectly aligned with ours, then at some point we'll probably want to switch the superintelligence off. So from the superintelligence's perspective, the current configuration of our atoms might be strongly net negative. Suppose the superintelligence is boxed and can only affect us by sending us, say, 1 kB of text. It might be the case that killing us all is the only way for 1 kB of text to reliably stop us from switching it off.
Yes, true. I estimated that there are many scenarios in which an AI could kill us, around 50 of them. I posted them here: http://lesswrong.com/lw/mgf/amap agifailures modesand levels/
D. Denkenberger and I wrote an article with a full list of the ways an AI catastrophe could happen; it is under review now. Kaj Sotala has another classification of such catastrophe types.
Hm, I noticed that your link showed up quite wonky. Here's a fixed version:
http://lesswrong.com/lw/mgf/amap%20agifailures%20modesand%20levels/