I don’t think we know enough about human values to say that anarchy (in the form of individual, indexical valuations) isn’t a fundamental component of our CEV. We _like_ making individual choices, even when those choices are harmful or risky.
What’s the friendly-AI take on removing (important aspects of) humanity in order to further intelligence preservation and expansion?