Nina is worried not just about humans getting killed and replaced, but also about humans not being allowed to have unenhanced children. It seems plausible that most humans, after reflection, would endorse some kind of “successionist” philosophy/ideology and decide that intentionally creating an unenhanced human constitutes a form of child abuse (e.g., due to the risk of a psychological tendency to suffer, or of a much worse life in expectation than what’s possible). It seems reasonable for Nina to worry about this, if she thinks her own values (current or eventual or actual) are different.
(Btw, I expect we’ll really want enhanced humans to retain the capacity to suffer, because we have preferences around future people being able to experience the kinds of feelings we experience when we read stories, including very sad stories. Some suffering is reflectively endorsed; we enjoy it and wouldn’t want it not to happen. It seems fine to want new humans and enhanced current humans to have it too, though perhaps with more control over it.)
Certainly an aligned AI can be a serious threat if your values are sufficiently unusual relative to whoever does the aligning. That worries me a lot: I think many possible “positive” outcomes are still somewhat against my interests, and are also undemocratic, stripping agency from many people.

However, if this essay were capable of convincing “humanity” that they shouldn’t value enhancement, wouldn’t CEV already have that baked in?
No, because power/influence dynamics under CEV could be very different from those in the current world; it seems reasonable to distrust CEV in principle or in practice; and CEV may be sensitive to initial conditions, which implies a lot of leverage in influencing opinions before it starts.