> Reread that sentence. Notice how the second half seems to contradict the first.
Perhaps you could explain? Social Darwinism in Earth terms seems to be the idea of “survival of the fittest” individuals within a society, but here I’m referring to a Star Trek variant of social Darwinism that occurs at the level of the society (similar to some definitions of social Darwinism described on the Wikipedia page, e.g. under the first Nazism header). The reason I call it social Darwinism rather than merely evolution is that in-fiction it occurs due to advances in societal values rather than biological changes, though perhaps this isn’t the clearest choice of terms. Societies that develop advanced technology are given moral weight by the powers that be, but those that have not yet developed such technology are given none. This moral weight is demonstrated by the willingness to avert extinction when doing so carries apparently trivial cost for the Enterprise crew, which seems to leave little room for doubt. I am proposing that sentencing the individuals within these societies to death because of insufficient societal “advancement” (collective action) is the evil part here.
> We do? This is not at all obvious. Consider the genetic changes in domestic animals, for example.
I will grant that humans are still evolving, because obviously you can’t turn it off in the broader sense. But I haven’t found any suggestions that people are evolving in ways that would change the moral weight we should assign individuals. Perhaps this is a weakness in our knowledge, and in the Star Trek universe biological evolution clearly continues in a morally relevant way (even though they never say anything like that), but based on what we currently know it seems unlikely that a smarter humanity is in the cards through evolutionary (vs. technological) means.
> Note that the above sentence implicitly uses deontological reasoning.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct.
This isn’t a statement about their current ethics, but a statement about what is available to them given their current cognitive abilities. It’s an empirical question whether a person has the ability to understand deontological, consequentialist, or virtue ethics.
>>> I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric.

>> Reread that sentence. Notice how the second half seems to contradict the first.

> Perhaps you could explain?
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
> But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals.
I didn’t say anything about moral weight, largely because I’ve never heard a good explanation of how it is supposed to be assigned. I’m talking about their cognitive abilities, in particular their ability to act sufficiently morally.
> I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way **can be a net benefit** and thus morally correct. [emphasis mine]
That’s deontological reasoning (there is a chance these people can be saved, thus it is our duty to try). Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
> You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
Fair enough. One difficulty of consequentialism is that unpacking it into English can be either difficult or excessively verbose. The reason Star Trek style social Darwinism is barbaric is because of its consequences (death of billions), not because it violates a moral rule that I have regarding social Darwinism. If it worked, then that would be fine.
> Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
The reason I said it “can be a net benefit” is specifically because I was trying to imply that one should weigh those consequences and act accordingly, not take action based on the fact that it is possible. The Prime Directive is a bright-line rule that precludes such weighing of consequences.
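The distinction between acting on mere possibility and weighing expected outcomes can be made concrete with a toy expected-value calculation. This is only a sketch with made-up numbers (none of them come from the discussion above); the point is the shape of the reasoning, not the figures:

```python
# Toy consequentialist comparison: weigh the probability of success and the
# cost of failure, rather than acting on mere possibility.
# All numbers are invented for illustration.

def expected_value(p_success, value_success, value_failure):
    """Expected value of an action with a binary outcome."""
    return p_success * value_success + (1 - p_success) * value_failure

# Intervening to avert an extinction: huge upside if it works,
# some downside (say, cultural disruption) if it fails.
intervene = expected_value(p_success=0.9,
                           value_success=1_000_000,
                           value_failure=-10_000)

# Abstaining under a bright-line rule: the extinction proceeds.
abstain = -1_000_000

# A consequentialist makes this comparison; a bright-line rule like the
# Prime Directive forbids even performing it.
print(intervene > abstain)  # True with these made-up numbers
```

With different inputs (a tiny chance of success, a catastrophic failure mode) the same comparison could favor abstaining, which is exactly the weighing the “can be a net benefit” phrasing was meant to imply.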