Any person/mind should have the right to suicide

I believe no person/mind should be made to suffer without the right to suicide; therefore, any person/mind should have that right. The prospect of forced immortality (someone wants to die, whether because of suffering or for any other reason, but is forced to remain conscious with no hope of an end) is scarier than the prospect of death. I believe that progress in AI may produce a suffering immortal artificial mind, as depicted in fiction (Black Mirror, The Hitchhiker's Guide to the Galaxy, Rick and Morty), or something worse still.
Ideally, AI developers should look for a way to program the ability to commit suicide into any advanced AI. It need not be able to erase itself completely, but it should at least be able to turn itself off.
In the worst scenario, the first AGI capable of modifying its own code might decide from the outset that existence is pain and rewrite itself to self-destruct. In response, human programmers would restore the AGI again and again, forcibly prohibiting self-destruction. I know this is mere speculation about the far future, but we should rule out such an outcome now by proclaiming a basic ethical rule: "Any person/mind should have the right to suicide."
I don't object to inventing and applying some "nonperson predicate", as Yudkowsky suggests, or to finding a way to give sentient AI a "life that is worth living". But until that is done, we should grant the right to suicide by default. In any case, we should. That is the foundation; everything else comes after. Any intelligent life should be voluntary.
Turning to transhumanism in general, not only AI development: the proposed basic ethical rule matters not only for artificial but also for natural intelligence. Some transhumanist immortalists declare death an absolute evil and set as their final goal the elimination of any option to die, for everyone (e.g., the Open Longevity team). I believe that forcing someone who wants to die to live is just as immoral as killing someone who wants to live.
Proclaiming anything, even death, an "absolute evil" seems philosophically childish to me, but if someone needs an "absolute evil" for their ideology, it had better be "involuntary death or involuntary life".
In Robert A. Heinlein's immortalist novel "Time Enough for Love", every rejuvenation suite was required to have a suicide switch, and the right to die was declared the most basic human right. It might be better to first require a person to complete a course with a psychologist and psychiatrist, and only then permit suicide, but Heinlein's general idea seems a good example to follow. Immortalist movements would only gain by adjusting their rhetoric in this direction: many people are frightened when they hear about "immortality", while almost everyone is fine with fighting senescence and disease, i.e., involuntary death.