I think I agree with your take on this Abram.
The most extreme version of an AI not being self-defensive seems like the Greg Egan *Permutation City* story, where shutting down a simulation doesn't even harm anyone inside—the computation just picks some other substrate "out of the dust."
By the way, this post dovetails interestingly with my latest on alignment as uploading with more steps.