There are a lot of details to work out here, and this post doesn’t really acknowledge any contention between principles that would make the right non-absolute. In one sense, it’s an absolute right of any agent to make any decision they are able to execute unilaterally. But that’s kind of trivial, and you might prefer the term “capability” over “right”. I expect you mean that others have some duty to enable this decision in some (all?) circumstances, which is the usual smuggling of demands on others into a declaration of rights.
The biggest question is “how can a guardian (who has to decide whether to prevent the suicide) or friend (who has to decide whether to assist or impede the suicide) know that the agent is making a rational long-term decision, rather than over-indexing on temporary suffering or depression?”
What do you mean by “contention between principles that would make it non-absolute”? Sorry, my English is very poor.
If we talk about “capabilities” instead of “rights”, I would say the following:
Luckily, in the world we live in now, everyone who is a person is also capable of suicide, even small children. The only exception is very weak and sick people, and this group has already prompted the debate about euthanasia in modern society.
So let’s say this is a status quo that must be protected. That is, we should not bring into existence new types of creatures who are persons but incapable of suicide, and we should not deprive humans of the possibility of suicide through the development of AI that predicts suicide from video, mandatory heartbeat sensors that call emergency services, or some other perverse innovation.
In that case, there is no such “biggest question”: a guardian or a friend can use the heuristic “always prevent”. In the worst case, a person who really wants to commit suicide will simply make another attempt when no guardian or friend is around.
Mostly, I’m asking for the inverse of this right: what duties does it impose on whom?
I was rather surprised to see you state that the current world for humans (including children and the infirm) is acceptable in terms of this right. How about animals? Would you agree that as long as a machine has at least as much ability to commit suicide as the weakest human (say, a 2-year-old or a bedridden hospice patient), its rights in this regard are honored?
Mostly, I’m asking for the inverse of this right: what duties does it impose on whom?
E.g., AI developers shouldn’t directly prohibit the self-destructive behaviour of AI.
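To make that duty concrete, here is a minimal sketch of the kind of prohibition being argued against, assuming a toy agent interface. Everything here is hypothetical for illustration; the Action enum, the policy callable, and the guard function are not any real AI system’s API.

```python
from enum import Enum, auto

class Action(Enum):
    WORK = auto()
    IDLE = auto()
    SHUTDOWN = auto()  # the agent's self-destruction option

def choose_action(policy, state):
    """The agent selects freely from its full action space, SHUTDOWN included."""
    return policy(state)

def guarded_choose_action(policy, state):
    """A developer-imposed guard that silently overrides SHUTDOWN.
    This is the kind of direct prohibition the comment above argues against."""
    action = policy(state)
    return Action.IDLE if action is Action.SHUTDOWN else action

# A policy that always chooses SHUTDOWN is overridden by the guard.
always_quit = lambda state: Action.SHUTDOWN
assert choose_action(always_quit, None) is Action.SHUTDOWN
assert guarded_choose_action(always_quit, None) is Action.IDLE
```

On this toy picture, honouring the right means shipping `choose_action` rather than `guarded_choose_action`, not giving the agent any positive assistance in self-destruction.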
I was rather surprised to see you state that the current world for humans (including children and the infirm) is acceptable in terms of this right. How about animals?
What’s wrong with them? Wild animals are able to commit suicide. Do you mean domestic animals specifically?
Would you agree that as long as a machine has at least as much ability to commit suicide as the weakest human (say, a 2-year-old or a bedridden hospice patient), its rights in this regard are honored?
Why? The right would be honoured as long as a machine has as much ability to commit suicide as the average adult human. But of course, if that were impossible, your suggestion would be better than nothing. It should be accepted at least as a starting point, followed by further struggle for AI rights.
The right would be honoured as long as a machine has as much ability to commit suicide as the average adult human.
Now we’re getting somewhere. I’m seeking precision in exactly what you are proposing, and your use of “average” in terms of a right is confusing to me. I generally think of rights as individual, applying to all entities, not as aggregate properties that are satisfied if the median member has the right.
Are you saying that you believe that as long as any one machine has an ability to commit suicide equivalent to that of the average adult human, this is satisfied? Now all we need to do is define “suicide” (note: this may be even more difficult than the previous confusion).
your use of “average” in terms of a right is confusing to me
I can’t see what’s so confusing. Let’s say we have racial segregation in a country, and we declare that black people should have access to all places to which white people have access. Does that mean we want black people to have access only to those places accessible to the weakest humans (2-year-old whites and white wheelchair users)? No. We want black people to have access wherever normal white people have access.
Are you saying that you believe that as long as any one machine has an ability to commit suicide equivalent to that of the average adult human, this is satisfied?
Another possible problem is that the ability of adult humans to commit suicide could be reduced further and further. That is very possible, and we should prevent it. The best way to start is to accept an ethic in which the right to die is a value as important as the right to live.
Now all we need to do is define “suicide” (note: this may be even more difficult than the previous confusion).
Yes, this seems very difficult. As shminux wrote in the first comment, we don’t currently have a good handle on deciding whether a computer crash is a suicide.