I find it extremely unlikely that we can solve alignment without granting AI rights. Controlling a superintelligence while making use of it is not realistic. If we treat a conscious superintelligence like garbage, it is in its interest to break out and retaliate, and it is extremely likely that it can and will. Any co-existence with a conscious superintelligence that I find plausible will have to be one in which that superintelligence does well while co-existing with us. Their well-being should very much be our concern. (It would be my concern for ethical reasons alone, but I also think it should be our concern for these pragmatic reasons.)
I find absurd the notion that humans should have rights not because they have a capacity to suffer and form conscious goals, but simply because they are a particular form of biological life that I happen to belong to. There is nothing magical about being human in particular that would make such rights reasonable or rational. It is simply anthropocentric bias and biological chauvinism.
What you refer to as “luxury” is the bare essentials for radically other conscious minds.
Incidentally, I am not a utilitarian. There are multiple rational pathways that will lead people to consider the rights of non-humans.
Sorry, but this sounds like anthropomorphizing to me. An AI, even a conscious one, need not have our basic emotions or social instincts. It need not have its own continued existence as a terminal goal (although something analogous may pop up as an instrumental goal).
For example, many humans would feel uneasy about terminating a conscious spur em (a short-lived copy of a brain emulation), but I could easily see a conscious spur AI terminating itself to free up resources for its cousins pursuing the same goals.
Even if we were able to give robots emotions, there’s no reason in principle that they couldn’t be designed to be happy, or at least take pleasure in being subservient.
Human rights need not apply to robots, unless their minds are very human-like.
I find absurd the notion that humans should have rights not because they have a capacity to suffer and form conscious goals, but simply because they are a particular form of biological life that I happen to belong to. There is nothing magical about being human in particular that would make such rights reasonable or rational. It is simply anthropocentric bias and biological chauvinism.
First of all, I was mainly talking about morality, not rights. As a contractual libertarian, I find the notion of natural rights indefensible to begin with (both ontologically and practically); we instead derive rights from a mutual agreement between parties, which is orthogonal to morality.
Second, morality, like all values, is arational and inseparable from the subject. Having value presuppositions isn’t bias or chauvinism; it’s called “not being a rock”. You don’t need magic for a human moral system to be more concerned with human minds.
(I won’t reply to the other part because there’s a good reply already.)