I’m not sure that holds for all moralities, but it seems likely for consequentialist moralities.
If someone is there to fight for a set of values, it’s more likely those values are achieved. I’d think this extends more generally to power and existence throughout all of time and space—a consequentialist will be motivated to increase the power of other consequentialists (who pursue the same consequences), and likely to decrease the power of those with conflicting moralities.
Creatures like themselves
I assume the measure of “like” is similarity of morality, so that Clippy should make sure that beings with Clippy-like morality continue to exist.
If someone is there to fight for a set of values, it’s more likely those values are achieved.
This is certainly true, and one good reason to create moral beings. But I am also arguing that the creation of moral beings is a terminal goal of morality (provided, of course, that those moral beings live worthwhile lives).
I assume the measure of “like” is similarity of morality, so that Clippy should make sure that beings with Clippy-like morality continue to exist.
I think we are using somewhat different terminology. You seem to be using the word “morality” as a synonym for “values” or “utility function.” I tend to follow Eliezer’s lead and consider morality to be a sort of abstract concept that concerns the wellbeing of people and what sort of creatures should be created. This set of concepts is one that most humans care about, and strongly desire to promote, but other creatures may care about totally different concepts.
So I would not consider paperclip maximizers, babyeaters, or sociopaths to be creatures that have any type of morality. They care about completely different concepts than the typical human being does, and referring to the things they care about as “morality” confuses the issue.
You seem to be using the word “morality” as a synonym for “values” or “utility function.”
Often I do, particularly when talking about moralities other than human.
Other times I will make what I consider a relevant functional distinction between general values and moral values in humans—moral values involve approval/disapproval in a recursive way: punishing an attitude, punishing a failure to punish that attitude, punishing a failure to punish the failure to punish that attitude, and so on. And similarly for rewards.
Elaborating on your point about supporting the continued existence of those who support your morality, on purely consequentialist grounds, Clippy is likely to engage in this kind of recursive punishment/reward as well, so he wouldn’t be entirely a Lovecraftian horror until the power of humans became inconsequential to his ends.
consider morality to be a sort of abstract concept that concerns the wellbeing of people
Likely Clippy will be “concerned” about the wellbeing of people to the extent that such wellbeing concerns the making of paperclips. Does that make him moral, according to your usage?
What kind of concern counts, and to which people must it apply? Concern is a very broad concept; the alternative is being unconcerned or indifferent. Clippy isn’t likely to be indifferent, and neither are sociopaths or babyeaters. Sociopaths and babyeaters are likely concerned about their own wellbeing, at least.
Clippy is likely to engage in this kind of recursive punishment/reward as well, so he wouldn’t be entirely a Lovecraftian horror until the power of humans became inconsequential to his ends.
I don’t deny that it may be useful to create some sort of creature with nonhuman values for instrumental reasons. We do that all the time now. For instance, we create domestic cats because of their instrumental value as companions and mousers, even though, if they are neurologically advanced enough to have values at all, they are probably rather sociopathic.
Likely Clippy will be “concerned” about the wellbeing of people to the extent that such wellbeing concerns the making of paperclips. Does that make him moral, according to your usage?
Sorry, I should have been clearer. In order to count as morality, the concern for the wellbeing of others has to be a terminal value. In Clippy’s case his concern is only an instrumental value, so he isn’t moral.