I broadly agree with the view that something like this is a big risk under a lot of current human value sets.
One important caveat for some value sets is that I don't think this results in an existential catastrophe. The broad reason is that in single-single alignment scenarios, some humans remain in control and potentially become immortal, and scenarios where this happens are excluded from the definition of existential catastrophe, because human potential is still realized; it's just that most humans are locked out of it.
It has similarities to this post, but is more fleshed out:
https://www.lesswrong.com/posts/2ujT9renJwdrcBqcE/the-benevolence-of-the-butcher