I think even if AI proves strictly incapable of surviving in the long run due to various efficiency constraints, that has no bearing on its ability to kill us all.
A paperclip maximizer that eventually runs into a halting problem as it tries to paperclip itself may very well have killed everyone by that point.
I think the term for this is “minimum viable exterminator.”