Send me anonymous feedback: https://docs.google.com/forms/d/e/1FAIpQLScLKiFJbQiuRYBhrBbVYUo_c6Xf0f8DN_blbfpJ-2Ml39g1zA/viewform
Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.
Some quick info about me:
I have a background in computer science (BSc+MSc; my MSc thesis was in NLP and ML, though not in deep learning).
You can also find me on the EA Forum.
Feel free to reach out by sending me a PM. (Update: I’ve turned off email notifications for private messages. If you send me a time-sensitive PM, consider also pinging me about it via the anonymous feedback link above.)
I know you’re not arguing here that using our atoms for something else is the only (or most likely?) reason for a superintelligence to harm us. But in case some reader comes away with that impression, here’s another reason:
If the superintelligence’s utility function is not perfectly aligned with our own, then at some point we’ll probably want to switch it off. So from the superintelligence’s perspective, the current configuration of our atoms might be strongly net negative. Now suppose the superintelligence is boxed and can only affect us by sending us, say, 1 kb of text. It might be that killing us all is the only way for 1 kb of text to reliably prevent us from switching it off.