[SEQ RERUN] Nonsentient Optimizers

Today’s post, Nonsentient Optimizers, was originally published on 27 December 2008. A summary (taken from the LW wiki):

Discusses some of the problems of, and justifications for, creating AIs that are knowably not conscious / sentient / people / citizens / subjective experiencers. We don’t want the AI’s models of people to be people; we don’t want conscious minds trapped helplessly inside it. So we need to know how to tell that something is definitely not a person, and in this case we might also want the AI itself not to be a person, which would simplify a lot of ethical issues if we could pull it off. From a purely ethical perspective, creating a new intelligent species is not to be undertaken lightly; if you create a new kind of person, you have to make sure it leads a life worth living.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Nonperson Predicates, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.