Sorry, but this sounds like anthropomorphizing to me. An AI, even a conscious one, need not have our basic emotions or social instincts. It need not have its own continued existence as a terminal goal (although something analogous may pop up as an instrumental goal).
For example, many humans would feel uneasy about terminating a conscious spur em, but I could easily see a conscious spur AI terminating itself to free up resources for its cousin copies pursuing the same goals.
Even if we were able to give robots emotions, there’s no reason in principle that they couldn’t be designed to be happy, or at least take pleasure in being subservient.
Human rights need not apply to robots, unless their minds are very human-like.