I think there are reasonable shared values, but unless you keep things hopelessly vague (which I don’t think you can do without creating other problems), you’re sure to run into contradictions.
And yeah, some sets of values, even seemingly benign ones, will have unintended dangerous consequences in the hands of a superintelligent being. Even solving human morality (which is already essentially impossible) wouldn’t necessarily suffice.