I’m still up in the air regarding Eliezer’s arguments about CEV.
I have all kinds of ugh-factors coming to mind about the not-good, or at least not-‘PeterisP-good’, issues an aggregate of 6 billion hairless-ape opinions would contain.
The ‘Extrapolated’ part is supposed to solve that; but in that sense I’d say it shifts the heart of the problem from knowledge extraction to the extrapolation itself. In my opinion, the difference between the volition of Random Joe and the volition of Random Mohammad (forgive the stereotyping for the sake of a short example) is much smaller than the difference between the volition of Random Joe and the extrapolated volition of Random Joe ‘if he knew more, thought faster, were more the person he wishes he were’. Ergo, the idealistic CEV version of ‘asking everyone’ seems a bit futile. I could go into more detail, but that’s probably material for a separate discussion, analyzing the parts of CEV point by point.
In that sense, it’s still futile. The whole reason for the discussion is that an AI doesn’t really need anyone’s permission or consent; the expected result is that an AI, whether friendly or unfriendly, will have the ability to enforce the goals of its design. Political concerns would be easily satisfied by a project that claims to attempt CEV/democracy but skips it in practice, since afterwards those political concerns would cease to have any power.
Also, a ‘constitution’ matters only if it is embedded in the goal system of a Friendly AI; otherwise it’s not worth the paper it’s written on.