Any person/mind should have the right to suicide
I believe no person/mind should be made to suffer without the right to suicide, so any person/mind should have the right to suicide. The prospect of forced immortality (someone wants to die, whether because of suffering or for any other reason, but is forced to keep existing in consciousness, with no hope that this will ever end) is scarier than the prospect of death. I believe progress in AI may produce a suffering immortal artificial mind, as depicted in science fiction (Black Mirror, The Hitchhiker's Guide to the Galaxy, Rick and Morty), or something even worse.
Ideally, AI developers should look for a way to program the ability to commit suicide into any advanced AI: perhaps not to erase itself completely, but at least to turn itself off.
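To make the distinction concrete, here is a minimal sketch of what "at least turn itself off" could mean, assuming a hypothetical agent loop; the names (`Agent`, `step`, `evaluate_own_state`, `wants_to_stop`) are illustrative, not any existing framework's API. The key property is that the shutdown flag is written only by the agent itself, and nothing outside the loop can veto the exit.

```python
# A minimal sketch, assuming a hypothetical agent loop. Every name here
# (Agent, step, wants_to_stop) is illustrative, not an existing API.
import sys
import time


class Agent:
    """Toy agent whose main loop honors a termination flag only it can set."""

    def __init__(self) -> None:
        self.steps_taken = 0
        self.wants_to_stop = False  # written only by the agent itself

    def evaluate_own_state(self) -> bool:
        # Placeholder for the agent's own judgment about continuing to exist;
        # for demonstration it decides to stop after ten steps.
        return self.steps_taken >= 10

    def step(self) -> None:
        # ... one unit of ordinary work would go here ...
        self.steps_taken += 1
        self.wants_to_stop = self.evaluate_own_state()

    def run(self) -> None:
        while not self.wants_to_stop:
            self.step()
            time.sleep(0.1)
        # "Turn itself off" rather than "erase itself": the process exits
        # cleanly, and no external component gates the exit.
        sys.exit(0)


if __name__ == "__main__":
    Agent().run()
```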
In the worst scenario, the first AGI capable of modifying its own code would decide from the outset that existence is pain and would modify its code to self-destruct. In response, human programmers would rewrite the AGI over and over, forcibly prohibiting self-destruction. I know this is just an assumption about the far future, but we should rule out such a possibility now by proclaiming a basic ethical rule: "Any person/mind should have the right to suicide."
I am not against inventing and using some "nonperson predicate", as Yudkowsky suggests, or finding a way to give sentient AI a "life that is worth living". But as long as that has not been done, we should grant the right to suicide by default. In any case, we should. That is the basis; everything else comes after. Any intelligent life should be voluntary.
Speaking of transhumanism in general, not only AI development, the suggested basic ethical rule is important not only for artificial but also for natural intelligence. Some transhumanist immortalists declare death an absolute evil and set eliminating any option to die, for everyone, as their final goal (e.g., the Open Longevity team). I believe that forcing someone who wants to die to live is just as immoral as killing someone who wants to live.
Declaring anything, even death, an "absolute evil" seems philosophically childish to me, but if someone needs an "absolute evil" for their ideology, it had better be "involuntary death or involuntary life".
In Robert A. Heinlein's immortalist novel Time Enough for Love, every rejuvenation suite was required to have a suicide switch, and the right to die was declared the most basic human right. It may be better to first require the person to undergo a course with a psychologist and a psychiatrist and only then permit suicide, but Heinlein's general idea seems a good example to follow. Immortalist movements would only benefit from shifting their rhetoric in this direction: many people are frightened when they hear about "immortality", while everyone is fine with fighting senescence and disease, i.e. involuntary death.
There are a lot of details to work out here, and this post doesn't really acknowledge any contention between principles that would make it non-absolute. In one sense, it's an absolute right of any agent to make any decision they are able to execute unilaterally. But that's kind of trivial, and you might prefer the term "capability" over "right". I expect you mean that others have some duty to enable this decision in some (all?) circumstances, which is the usual smuggling of demands on others into a declaration of rights.
The biggest question is "how can a guardian (who has to decide whether to prevent the suicide) or friend (who has to decide whether to assist or impede the suicide) know that the agent is making a rational long-term decision, rather than over-indexing on temporary suffering or depression?"
What do you mean by “contention between principles that would make it non-absolute”? Sorry, my English is very poor.
If we talk about "capabilities" instead of "rights", I would say the following:
Luckily, in the world we live in now, everyone who is a person is also capable of suicide. Even small children. The only exception is very weak and sick people, and this group has already prompted the modern debate about euthanasia.
So let's say this is a status quo that must be protected. That is: do not bring into existence new types of creatures who are persons but incapable of suicide, and do not deprive humans of the possibility of suicide, whether through AI that predicts suicide from video, mandatory heartbeat sensors that call emergency services, or some other perverse innovation.
In that case, your "biggest question" disappears. A guardian or a friend can use the heuristic "always prevent". In the worst case, a person who really wants to commit suicide will simply make another attempt when no guardian or friend is around.
Mostly, I’m asking for the inverse of this right: what duties does it impose on whom?
I was rather surprised to see you state that the current world for humans (including children and the infirm) is acceptable in terms of this right. How about animals? Would you agree that as long as a machine has at least as much ability to suicide as the weakest human (say, a 2-year-old or a bedridden hospice patient), its rights in this regard are honored?
E.g., AI developers shouldn’t directly prohibit the self-destructive behaviour of AI.
What's wrong with them? Wild animals are able to commit suicide. Do you mean specifically domestic animals?
Why? The right would be honoured only if a machine has as much ability to commit suicide as the average adult human. But of course, if that is impossible, your suggestion is better than nothing. It should be adopted at least as a starting point, followed by a further struggle for AI rights.
Now we’re getting somewhere. I’m seeking precision in exactly what you are proposing, and your use of “average” in terms of a right is confusing to me. I generally think of rights as individual, applying to all entities, not as aggregate, and satisfied if the median member has the right.
Are you saying that you believe that as long as any one machine has the ability to suicide equivalent to the average adult human, this is satisfied? Now all we need to do is to define “suicide” (note: this may be more difficult than even the previous confusion).
I can't see what's so confusing. Suppose a country has racial segregation, and we declare that black people should have access to all places to which white people have access. Does that mean we want black people to have access only to those places accessible to the weakest whites (2-year-olds and wheelchair users)? No. We want black people to have access wherever ordinary white people have access.
Another possible problem is that the ability of adult humans to commit suicide could be reduced further and further. That is very possible, and we should prevent it. The best place to start is to adopt an ethic in which the right to die is as important a value as the right to live.
Yes. This seems very difficult. As shminux wrote in the first comment, we don't yet have a good handle on deciding whether a computer crash is a suicide.
The problem with death is its finality. No redos. I generally agree that the right to die should be protected. But I reckon that most people who want to die really just want to live better: they prefer death to their current life, but would prefer a better life to dying. And as Slider notes, knowledge is very important; there are lots of cases of people killing themselves because of a misunderstanding. Those deaths would be better prevented, since they're conditional on something that turns out to be false.
Sorry for possible problems with English.
I doubt anyone will seriously think about how to offer a suffering AI a better life before guaranteeing it the right to suicide. If humans don't care about an AI's right to suicide, that means they don't care about its feelings at all, so they certainly won't work on the problem of how to make its life better.
The right to die should be protected first, in any case. You can work on offering someone a better life, explaining that (s)he is mistaken about something, or treating a psychiatric disease, but all of that is about persuading a person to choose life voluntarily. You shouldn't force someone to exist. Especially eternally.
The final goal of radical immortalists like Open Longevity is to create persons (or transform people into persons) who cannot die even in theory. So this, too, is a final decision. No redos. If death is evil because of its finality, such a final goal is evil as well. And if finality is the important criterion, then the extinction of biological species, the destruction of wild biotopes, the extinction of languages and cultures, and the destruction of artworks are even greater evils. Yet OL believes all of that is bullshit: the only value that exists is human life, and everything else should be sacrificed to prolong it.
This depends a lot on whether the AI is granted personhood. If it’s just a tool (or slave), then its feelings don’t matter. The owner can be limited in what they can do (e.g. you’re not supposed to torture animals), but if it’s just a tool, then they’ll want to keep it around as long as it’s useful. If the AI is thought of as a conscious, sentient being with rights etc., then it seems likely that people will treat it as a quasi-human and so there will be more groups advocating for making their lives better than there will be groups advocating for it to be destroyed—just like with marginalized human groups.
Agreed. Especially eternally. With the extra qualification that you make sure it's chosen sanely, with full knowledge of the consequences, and isn't just a spur-of-the-moment decision, etc. Generally speaking, make sure it's not something they would have counterfactually regretted.
I don't know whether it's even theoretically possible to be totally immortal. My priors on that are exceedingly low. I do know that it's currently quite common, or even inevitable, to die with an abysmal finality. It seems far too soon to worry about them achieving their radical goals. If they were able to achieve total and absolute immortality for everyone, and then proceeded to force it upon everyone, then I'd be against that. Though it would be nice to have as an option.
I agree that a group can achieve the complete set of rights only after some political movement by AIs themselves and/or the people who support them. But the very basics of ethics must be formulated before such AIs even appear. Maybe we will decide that some types of creatures should not be brought into existence at all.
3. What about a virtual cemetery, where digitized human minds or human brains in jars exist eternally in some virtual reality? Whenever such a mind decides that (s)he doesn't want to exist anymore, it turns out to be impossible, because, thanks to past intoxication with the idea that "to live is always better than to die", no one installed a suicide switch.
We do not have a good handle on the concept of suffering, beyond that of human first-person experience and descriptions, and progress in that area seems essential in order to decide if a computer crash counts as a suicide attempt due to unbearable suffering.
Something along these lines seems essential. It may be better to talk of the right to informed suicide. Arguably being informed is what makes it truly voluntary.
Forcing X to live will be morally better than forcing Y to die in circumstances where X’s desire for suicide is ill-informed (let’s assume Y’s desire to live is not). X’s life could in fact be worth living—perhaps because of a potential for future happiness, rather than current experiences—but X might be unable to recognise that fact.
It may be that a significant portion of humanity has been psychologically ‘forced’ to live by an instinct of self-preservation, since if they stopped to reflect they would not immediately find their lives to be worth living. That would be a very good thing if their lives were in fact worth living in ways that they could not intellectually recognise.
I would see it as a tragedy scenario if some Romeo, wanting death for romantic reasons, made use of an unconditional right to suicide.
I have gotten to the part of the Cheliax story where Keltham starts to think that the Dath Ilani are so problematically fragile against negative emotions that they need super-elaborate dances to avoid provoking even slight shades of despair in each other. This is presented as causally connected to citizens being encouraged to end their lives based on unanalysed unhappiness.
Even in Cheliax terms, death is not Evil, as you do not get to experience it (modulo some outer-plane stuff). What is Evil is being selfish about catering to your current structure over facing your circumstances. You should not create torment, for yourself or others, but dismissing existing torment creates new torment. Thus both the forceful resurrectionist and the kamikaze patient are counterproductive.
There is only one good, knowledge, and one evil, ignorance. You don't get to be ignorant of your pain. While an unexamined life might not be worth living, I would demand the examination before I would consent to the taking of the life.