I haven’t watched Star Trek, so I looked up the Prime Directive on Wikipedia. Interestingly, there’s a quote from Jean-Luc Picard suggesting that the justification for the directive is actually broadly consequentialist:
The Prime Directive is not just a set of rules. It is a philosophy, and a very correct one. History has proven again and again that whenever mankind interferes with a less developed civilization, no matter how well intentioned that interference may be, the results are invariably disastrous.
Pretty sure that the results haven’t invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed. I do agree that making non-intervention an inviolable diktat, especially in an extremely populated universe, is horribly misguided.
Pretty sure that the results haven’t invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed.
Compared to what?
Was life all bright and shiny before civilization interjected itself into the existing barbarism?
I’ve generally considered the Prime Directive moral cowardice dressed up in self-righteousness.
Was life all bright and shiny before civilization interjected itself into the existing barbarism?
In the Star Trek universe, frequently yes. Granted, this is completely unrealistic sociology, but then again warp drive and transporters are completely unrealistic physics.
Compared to how things were before the intervention. And no, things usually weren’t “bright and shiny” before either, but it is possible for shitty situations to get even shittier.
I believe there was an article on Overcoming Bias about how people frequently use consequentialist logic to support their beliefs, when their underlying reasoning is anything but a dispassionate analysis, and I think that logic applies to Picard’s quote.
The justification for the Prime Directive that has appeared in multiple episodes I’ve watched (I have been watching all of the episodes, starting with the original series, and am now several seasons into TNG) is that we need to see whether these societies are able to successfully “develop” past the stages of evil and become enlightened societies. I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric. We already know from real life that there has been no significant biological evolution since humans developed mature civilizations, and yet we are to believe that the right moral choice is to let these species “evolve naturally” to see if they are worthy (they are allowed to know of Starfleet once they have achieved warp drive technology). If these people are biologically capable of advanced moral thought, that capability exists whether they are currently exercising it or not.
The basic question is whether you think the world would have turned out better or worse if you could go back several hundred years and tell humans, “Hey, this slavery thing is not so hot, it really doesn’t work out well,” and other moral truths that we take for granted. And that is aside from the situations where the crew is directed not to intervene even when, for example, a star’s collapse is going to destroy a civilization made up of billions of individuals who have moral valence, through no particular fault of that society and with no bearing on whether it will ever achieve Starfleet’s preferred standard of morality. I find the idea that it is universally negative from a cost-benefit perspective to “interfere” with a culture’s development, such that non-interference becomes the first and most important rule of Starfleet, to be utterly preposterous and morally repugnant, as well as a hilarious injustice to individuals in the name of judging them solely by their group membership.
Of course, I acknowledge that it’s not really “about” that, in a way. The real purpose of the Prime Directive is for Gene Roddenberry to aggressively signal how much he disagrees with imperialism as it historically occurred on Earth, but it makes no in-fiction sense, given the Federation’s supposedly advanced level of moral development and superior anthropological knowledge.
Reread that sentence. Notice how the second half seems to contradict the first.
Perhaps you could explain? Social Darwinism in Earth terms seems to be the idea of “survival of the fittest” individuals within a society, but here I’m referring to a Star Trek variant of social Darwinism that occurs at the level of the society (similar to some definitions of social Darwinism described on the Wikipedia page, e.g. under the first Nazism header). The reason I call it social Darwinism rather than merely evolution is that in-fiction the selection occurs through advances in societal values rather than through biological changes, but perhaps this isn’t the clearest choice of terms. Societies that are able to develop advanced technology are given moral weight by the powers that be, but those that have not yet developed such technology are given no moral weight. This moral weight is demonstrated by the willingness to avert extinction when such action carries apparently trivial cost for the Enterprise crew, which seems to leave little room for doubt. I am proposing that sentencing the individuals within these societies to death because of insufficient societal “advancement” (collective action) is the evil part here.
We do? This is not at all obvious. Consider the genetic changes in domestic animals, for example.
I will grant that humans are still evolving, because obviously you can’t turn it off in the broader sense. But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals. Perhaps this is a weakness in our knowledge and in the Star Trek universe it’s clear that biological evolution continues to proceed in such a way that is morally relevant (even though they never say anything like that), but it seems unlikely based on what we currently know that a smarter humanity is in the cards through evolutionary (vs. technological) means.
Note that the above sentence implicitly uses deontological reasoning.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct.
This isn’t a statement about their current ethics, but a statement about what is available to them given their current cognitive abilities. It’s an empirical question whether a person has the ability to understand deontological, consequentialist, or virtue ethics.
I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric.
Reread that sentence. Notice how the second half seems to contradict the first.
Perhaps you could explain?
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals.
I didn’t say anything about moral weight, largely because I’ve never heard a good explanation of how it is supposed to be assigned. I’m talking about their cognitive abilities, in particular their ability to act sufficiently morally.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct. [emphasis mine]
That’s deontological reasoning (there is a chance these people can be saved, thus it is our duty to try). Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
Fair enough. One difficulty of consequentialism is that unpacking it into English can be either difficult or excessively verbose. The reason Star Trek style social Darwinism is barbaric is because of its consequences (death of billions), not because it violates a moral rule that I have regarding social Darwinism. If it worked, then that would be fine.
Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
The reason I said it “can be a net benefit” is specifically because I was trying to imply that one should weigh those consequences and act accordingly, not take action based on the fact that it is possible. The Prime Directive is a bright-line rule that precludes such weighing of consequences.