Certainly, an aligned AI can be a serious threat if you have sufficiently unusual values relative to whoever does the aligning. That worries me a lot: I think many possible "positive" outcomes are still somewhat against my interests, and are also undemocratic, stripping agency from many people. However, if this essay were capable of convincing "humanity" that they shouldn't value enhancement, shouldn't CEV already have that baked in?
No, for a few reasons: power/influence dynamics could be very different under CEV than in the current world; it seems reasonable to distrust CEV either in principle or in practice; and CEV may be sensitive to initial conditions, which would imply a lot of leverage in influencing opinions before it starts.