I think Eliezer’s attempt at moral realism derives from two things: first, the idea that there is a unique morality which objectively arises from the consistent rational completion of universal human ideals; second, the idea that there are no other intelligent agents around with a morality drive that could have a different completion. Other possible agents may have their own drives or imperatives, but those should not be regarded as “moralities”—that’s the import of the second idea.
This is all strictly phrased in computational terms too, whereas I would say that morality also has a phenomenological dimension, which might serve to further distinguish it from other possible drives or dispositions. It would be interesting to see CEV metaethics developed in that direction, but that would require a specific theory of how consciousness relates to computation, and especially how the morally salient aspects of consciousness relate to moral cognition and decision-making.
> Other possible agents may have their own drives or imperatives, but those should not be regarded as “moralities”—that’s the import of the second idea.
He seems to believe that, but I don’t see why anyone else should. It’s like saying English is the only language, or the Earth is the only planet. If morality is having values, any number of entities could have values. If it’s rules for living in groups, ditto. If it’s fairness, ditto.
> This is all strictly phrased in computational terms too
It’s not strictly phrased at all. It’s very hard to follow what he’s saying, and it isn’t particularly computational.