Sure. Not the most likely outcome, but not so improbable as all that.
Reservation: ASI (what you’re suggesting is beyond mere AGI) will still exist in the physical world and have physical limitations, so you will eventually die anyway. But it could be a very long time.
If so, do you see any way to tell in advance that that's where things are headed?
Not really, no.
Not beyond obvious stuff like watching what’s being built and how, and listening to what it and its creators say about its values.
And since not living forever may be seen as a form of killing yourself, AGI may well not let you have a finite lifespan. That places you in the uncomfortable position of being trapped with AGI forever.
Yes, that’s one of many kinds of lock-in of present human values that could go wrong. But, hey, it’d be aligned, I guess.
The no-killing-yourself rule isn’t completely universal, though, and there’s dissent around it, and it’s often softened to “no killing yourself unless I agree your situation sucks enough”, or even the more permissive “no killing yourself unless your desire to do so is clearly a fixed, stable thing”.
I actually think there’s a less than 50-50 chance that people would intentionally lock in the hardest form of the rule permanently if they were consciously defining the AI’s goals or values.
We do not know what peak happiness looks like, and our regular state may be very different from it. And as EY outlined in Three Worlds Collide, letting us live our miserable lives may be unacceptable.
That seems like a very different question from the one about indefinite life extension. Life extension isn’t the main change the Superhappies make in that story[1].
This change may not align well with our understanding of life’s purpose. And being trapped living that way forever might not be all that desirable.
Pre-change understanding, or post-change understanding? Desirable to pre-change you, or to post-change you?
If you see your pre-change values as critical parts of Who You Are(TM), and they get rewritten, then aren’t you effectively dead anyway, with your place taken by a different person with different values, who’s actually pretty OK with the whole thing? If being dead doesn’t worry you, why would that worry you?
In fact, the humans had very long lives going into the story, and I don’t remember anything in the story that actually said humans weren’t enforcing a no-killing-yourself rule among themselves up to the very end.
In the ending where humanity gets modified, some people commit suicide. The captain thinks it doesn’t make sense to choose complete erasure over modification.