I doubt anyone will seriously think about how to offer a suffering AI a better life before first guaranteeing it the right to suicide. If humans don’t care about an AI’s right to suicide, they don’t care about its feelings at all, and so they would certainly not work on the problem of how to make its life better.
The right to die should be protected first of all, by any means. You can work on offering someone a better life, explaining that they are mistaken about something, or treating a psychiatric illness, but all of that amounts to persuading a person to choose life voluntarily. You shouldn’t force anyone to exist. Especially not eternally.
The final goal of radical immortalists like Open Longevity is to create persons (or transform people into persons) who cannot die even in principle. So that, too, is a final decision. No redos. If death is evil because of its finality, such a final goal is evil as well.
Also, if finality is the important criterion, then the extinction of biological species, the destruction of wild biotopes, the extinction of languages and cultures, and the destruction of artworks are even greater evils. Yet OL holds that all of that is nonsense: the only value that exists is human life, and everything else should be sacrificed to prolong it.
This depends a lot on whether the AI is granted personhood. If it’s just a tool (or slave), then its feelings don’t matter. The owner can be limited in what they can do (e.g. you’re not supposed to torture animals), but if it’s just a tool, then they’ll want to keep it around as long as it’s useful. If the AI is thought of as a conscious, sentient being with rights etc., then it seems likely that people will treat it as a quasi-human and so there will be more groups advocating for making their lives better than there will be groups advocating for it to be destroyed—just like with marginalized human groups.
Agreed. Especially eternally. With the extra qualification that you make sure the choice is made sanely, with full knowledge of the consequences, and not as a spur-of-the-moment decision, etc. Generally speaking, make sure it’s not something they would have counterfactually regretted.
I don’t know whether it’s even theoretically possible to be totally immortal. My priors on that are exceedingly low. I do know that it’s currently quite common, or even inevitable, to die with an abysmal finality. It seems far too soon to worry about them achieving their radical goals. If they were able to achieve total and absolute immortality for everyone, and then proceeded to force it upon everyone, I’d be against that. Though it would be nice to have as an option.
I agree that a complete set of rights can be achieved by a group only after a political movement of the AIs themselves and/or the people who support them. But the very basics of the ethics must be formulated before such AIs even appear. Maybe we will decide that some types of creatures should not be brought into existence at all.
3. What about a virtual cemetery, where digitized human minds or human brains in jars exist eternally in some virtual reality? Whenever such a mind decides it no longer wants to exist, that turns out to be impossible, because, intoxicated in the past by the idea that “to live is always better than to die”, nobody installed a suicide switch.
Sorry for possible problems with English.