It’s plausible that there will soon be digital minds that are sentient and deserving of rights. This raises several important issues that we don’t know how to deal with.
See the first five posts of my sequence AI, Alignment, and Ethics for a detailed analysis of exactly this, including:

- how to make decisions like this
- under what circumstances we should, and should not, give rights to different sorts of digital minds; how much moral weight to give them; and how this interacts with things like their ease of duplication
- which rights to give them, and why
- whether the criterion for moral rights should be sapience or sentience

(Some of the results were quite counterintuitive to me when I first derived them.)