That depends on who "we" refers to. What I meant is that if the FAI programmers and other intellectuals believe that extrapolated humans in general might not care about harming uploads (future humans with no voice in that CEV), whereas a selective CEV of intellectuals is expected to care, then they should take this into account when deciding whether to give all of humanity a say in the CEV, rather than a subset of minds they consider safer in that regard.
So even if implementing a universal CEV as the initial dynamic can be expected to reflect the desires of humanity at large better than a selective initial CEV would, this does not define the total moral space that should concern those responsible for, and with the power to influence, that implementation.