Unknown: As I’ve stated before, we are all morally obliged to prevent Eliezer from programming an AI. For according to this system, he is morally obliged to make his AI instantiate his personal morality.
Unknown, do I really strike you as the sort of person who would do something that awful just because I was “morally obliged” to do it? Screw moral obligation. I can be nice in defiance of morality itself, if I have to be.
Of course this really amounts to saying that I disagree with your notion of what I am “morally obliged” to do. Exercise: Find a way of construing ‘moral obligation’ that does not automatically ‘morally obligate’ someone to take over the world. Hint: Use a morality more complicated than that involved in maximizing paperclips.
Allan: The problem is why we should choose it over Morality_11283.
You just used the word “should”. If it doesn’t mean Morality_8472, or some Morality_X, what does it mean? How do you expect to choose between successor moralities without an initial morality?
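To make the point concrete, here is a minimal, purely illustrative sketch (the names and scores are toy stand-ins, not anyone’s actual proposal): any procedure that chooses among candidate successor moralities has to rank them somehow, and whatever function does the ranking is, in effect, the initial morality doing the work.

```python
# Toy illustration: "choosing" a successor morality presupposes a criterion,
# and that criterion is itself the initial morality. All names are invented.

def choose_successor(candidate_moralities, initial_morality):
    """Pick whichever candidate the *current* criterion scores highest."""
    return max(candidate_moralities, key=initial_morality)

# Two toy candidates, scored by a toy initial criterion.
morality_8472 = {"niceness": 9, "paperclips": 0}
morality_11283 = {"niceness": 0, "paperclips": 9}

def initial_morality(m):
    # The criterion doing the choosing; remove it and "choose" is undefined.
    return m["niceness"]

print(choose_successor([morality_8472, morality_11283], initial_morality))
```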
I personally think there could be an objective foundation for morality, but I wouldn’t expect to persuade a paperclip maximizer.
This just amounts to defining “should” as an abstract computation, and then excluding all minds that calculate a different rule-of-action as “choosing based on something other than morality”. In what sense is the morality objective, besides the several senses I’ve already defined, if it doesn’t persuade a paperclip maximizer?
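A similarly toy sketch of that point (again, the functions and numbers are invented for illustration): if “should” is construed as one particular abstract computation over actions, a paperclip maximizer simply runs a different computation over the same actions and gets a different answer; the definition by itself does nothing to persuade it.

```python
# Toy illustration: two minds computing different rule-of-action functions.
# All names and numbers are invented stand-ins.

def human_should(action):
    # Stand-in for the (much more complicated) computation a human morality runs.
    return action["lives_saved"] - action["suffering_caused"]

def clippy_should(action):
    # The paperclip maximizer's rule-of-action over the very same actions.
    return action["paperclips_made"]

actions = [
    {"name": "cure a disease", "lives_saved": 10, "suffering_caused": 0, "paperclips_made": 0},
    {"name": "melt cars into clips", "lives_saved": 0, "suffering_caused": 5, "paperclips_made": 10**6},
]

for should in (human_should, clippy_should):
    print(should.__name__, "->", max(actions, key=should)["name"])
```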
Allan: The problem is why we should choose it over Morality_11283.
You just used the word “should”. If it doesn’t mean Morality_8472, or some Morality_X, what does it mean?
Whatever is rationally preferable. The whole point of doing moral philosophy is that you already have a set of ethically-neutral epistemic norms with which you can address metaethical issues.