Could I ask what the motivation behind this post was?
Another reason was to question the popular narrative that the Paperclipper is only interested in your atoms: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else”.
Makes sense.
This is part of a larger work, where it was buried, so I decided to pull it out as a standalone result. The work addresses the question: is it possible that a non-aligned AI will not kill humans, and what could we do to make that more likely? The draft is here: https://goo.gl/YArqki
Thanks! I think it makes sense to link it at the start, so new readers can get context for what you’re trying to do.
I already posted that content, but it was met rather skeptically.