First, the set of minds included in CEV is totally arbitrary, and hence so is the output. Why include only humans? Why not animals? Why not dead humans? Why not humans who weren’t born but might have been? Why not paperclip maximizers? Baby-eaters? Pebble sorters? Suffering maximizers? Wherever you draw the line, you’re already inserting your values into the process.
I agree that it is impossible to avoid inserting your values, and CEV does not work as a meta-ethical method of resolving ethical differences. However, it may be effective as a form of utilitarianism. It seems that CEV should include all current and future sentient beings, with their preferences weighted according to their level of sentience and, for future beings, their probability of coming into existence. (This will probably always be computationally infeasible, no matter how powerful computers get.)
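To make the weighting scheme concrete, here is a toy sketch of the aggregation it implies: each being’s preference counts in proportion to its sentience level and, for possible future beings, its probability of coming into existence. Every name and number below is a made-up illustration, not part of any actual CEV proposal, and real sentience weights or existence probabilities are of course not things we know how to measure.

```python
# Toy sketch: a being's preference over some outcome is weighted by its
# sentience level and its probability of existing. All values are invented
# purely for illustration.

beings = [
    # (preference strength, sentience weight, P(coming into existence))
    (1.0, 1.0, 1.0),   # a current human: certain to exist, full weight
    (0.5, 0.3, 1.0),   # a current animal: exists, but lower sentience weight
    (1.0, 1.0, 0.2),   # a possible future human: discounted by probability
]

def aggregate(beings):
    """Sum of preference * sentience * existence-probability."""
    return sum(pref * sentience * p_exist for pref, sentience, p_exist in beings)

print(round(aggregate(beings), 6))
```

This is just expected-value weighting; the computational infeasibility noted above comes from enumerating all current and future sentient beings, not from the arithmetic itself.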
I just thought of this, so I’d be interested to hear if others have any revisions or objections.