Becoming sympathetic to consequentialist thought has definitely ruined most (almost all?) pop culture artifacts involving morality for me. I just sit there thinking, “Wow, they should definitely put a bullet in that guy’s head ASAP,” interleaved with, “Wait, what’s the big deal, I don’t see anyone getting hurt here,” depending on the genre. Watch Star Trek TNG episodes with this in mind and you will quickly think that they are simultaneously completely incompetent and morally monstrous (the Prime Directive is one of the most evil rules imaginable).
Becoming sympathetic to consequentialist thought has definitely ruined most (almost all?) pop culture artifacts involving morality for me.
Try being sympathetic to egoist thought and watching the movies. While I enjoy “It’s a Wonderful Life” and “The Philadelphia Story”, I consider the morality monstrous.
“Wow, they should definitely put a bullet in that guy’s head ASAP,”
Yes, and in the magic fictional universe, not blowing him away when you had the chance miraculously turns out for the good, instead of getting everyone killed.
I found the Prime Directive to be one of the hardest lessons in consequentialism. If it existed in the real world, we would not have many of the current problems in the developing world, where people slaughter each other using modern weapons instead of spears and bows. And they coordinate the slaughter using modern tech, too. And the radicalization of Islam has been caused in part by the Western ideas of decompartmentalization. Exploiting poorer nations and depleting their natural resources doesn’t help much, either. The so-called foreign aid does more harm than good, too. If only Europeans had enough sense to refrain from saving the savages until they are ready.
As I said in another comment in this thread, we know that the real-world reason the Prime Directive exists is that Gene Roddenberry hated historical European imperialism. I grant that the Prime Directive may be a handy rule of thumb given imperfect knowledge and the in-universe history of interference. My main problem with it is that it is a zero tolerance policy where the outcome of following it is not someone being expelled for bringing Tylenol to school, but the extinction of a species with billions of lives. It would be as if Europeans knew Africa was going to sink into the ocean in one year and weren't even willing to tell the Africans it was going to happen (and then patted themselves on the back for being so enlightened). And this becomes the core founding principle of the Federation.
My main problem with it is that it is a zero tolerance policy where the outcome of following it is, rather than someone being expelled for bringing Tylenol to school, the extinction of a species with billions of lives.
I don’t think you interpret the Prime Directive the way Gene Roddenberry did. The directive says that you don’t meddle in the affairs of other cultures just because they act in a way that seems wrong to you (incidentally, that’s why I am unimpressed with the reactions of all three species in Three Worlds Collide: all three are overly Yudkowskian in their interpretation of morality as objective). It does not say that you should not attempt to save them from a certain extinction or disaster, and there are several episodes where our brave heroes do just that, all the while trying to minimize their influence on said cultures otherwise, admittedly with mixed results.
See the episode Pen Pals. The population is going to be destroyed by a geological collapse, and Picard decides that the Prime Directive requires they let everyone there die. Of course, by sheer luck they hear a girl call for help to Data while they are debating the issue, which Picard determines is a “plea for help,” so responding doesn’t violate the Prime Directive. But without that plea, they were going to let everyone die (even though they had the technological capability to save the world without anyone knowing they intervened). I believe this episode had the most protracted discussion of the Prime Directive that we have seen in-fiction. In Homeward, Picard considers it a grave violation of the Prime Directive that Worf’s brother has attempted to save a population when everyone on their planet was going to die in 38 hours.
OK, you have a point, sometimes it does not mean what I thought it did. If you look at the general description of it, however, there are 8 items there, only one of them (“Helping a society escape a natural disaster known to the society, even if inaction would result in a society’s extinction.”) of the questionable type you describe. The original statement, “no identification of self or mission; no interference with the social development of said planet; no references to space, other worlds, or advanced civilizations.” also makes perfect sense.
If it existed in the real world, we would not have many of the current problems in the developing world, where people slaughter each other using modern weapons instead of spears and bows.
I am still not convinced that life would be better in that parallel reality. Why exactly is being killed by a gun worse than being killed by a spear?
In many cases, those civilizations were knocked back to “savage level” by dehumanizing levels of exploitation, colonization, and sheer deliberate destruction by Europeans in the first place.
This doesn’t excuse the behavior of post-colonialist Third World countries, except in the sense that one who creates a power vacuum may bear some responsibility for whoever fills it.
Maybe I was unclear. It seems that you and I agree that the Prime Directive would be a good default deontological rule when dealing with less advanced societies.
“Wow, they should definitely put a bullet in that guy’s head ASAP,”
Consider the comparable real life situation. LessWrong has a policy against listing real life examples, so I won’t, but you should be able to think of some. While we’re at it, think about the reason LW has this policy.
“Wait, what’s the big deal, I don’t see anyone getting hurt here,”
You mean you don’t see anyone getting immediately hurt. The kinds of civilization-affecting decisions that occur on Star Trek frequently have indirect effects that are orders of magnitude larger than their direct effects.
The problem is that fiction often removes the most compelling reasons that this sort of thinking doesn’t work in the real world (uncertainty regarding facts, uncertainty regarding moral reasoning), but tries to retain the moral ambiguity. I think I would be much happier if police were perfect virtue ethicists or deontological reasoners than is currently the case, but if Blofeld reveals his dastardly plans to Bond, I want as many bullets in his head as can be arranged in short order.
The problem is that fiction often removes the most compelling reasons that this sort of thinking doesn’t work in the real world (uncertainty regarding facts, uncertainty regarding moral reasoning)
To a certain extent this is true due to narrative requirements; however, to a certain extent it’s a realistic portrayal of what certain states of knowledge can feel like from the inside.
Edit: Also this helps reduce the amount of memetic hazards in fiction.
I haven’t watched Star Trek, so I looked up the Prime Directive on Wikipedia. Interestingly, there’s a quote from Jean-Luc Picard suggesting that the justification for the directive is actually broadly consequentialist:
The Prime Directive is not just a set of rules. It is a philosophy, and a very correct one. History has proven again and again that whenever mankind interferes with a less developed civilization, no matter how well intentioned that interference may be, the results are invariably disastrous.
Pretty sure that the results haven’t invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed. I do agree that making non-intervention an inviolable diktat, especially in an extremely populated universe, is horribly misguided.
Pretty sure that the results haven’t invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed.
Compared to what?
Was life all bright and shiny before civilization interjected itself into the existing barbarism?
I’ve generally considered the Prime Directive moral cowardice dressed up in self righteousness.
Was life all bright and shiny before civilization interjected itself into the existing barbarism?
In the Star Trek universe, frequently yes. Granted, this is completely unrealistic sociology, but then again warp drive and transporters are completely unrealistic physics.
Compared to how things were before the intervention. And no, things usually weren’t “bright and shiny” before either, but it is possible for shitty situations to get even shittier.
I believe there was an article on Overcoming Bias about how people frequently use consequentialist logic to support their beliefs, when their underlying reasoning is anything but a dispassionate analysis, and I think that logic applies to Picard’s quote.
The justification for the Prime Directive that has appeared in multiple episodes I’ve watched (I have been watching all of the episodes, starting with the original series, and am now several seasons into TNG) is that we need to see if these societies are able to successfully “develop” past the stages of evil and become enlightened societies. I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric. We already know from real life that there has been no significant biological evolution since humans developed mature civilizations, and yet we are to believe that the right moral choice is to let these species “evolve naturally” to see if they are worthy (they are allowed to know of Starfleet once they have achieved warp drive technology). If these people are biologically capable of advanced moral thought, that capability exists whether they are currently exercising it or not.
The basic question is whether you think the world would have turned out better or worse if you could go back several hundred years and tell humans, “Hey, this slavery thing is not so hot, it really doesn’t work out well,” and other moral truths that we take for granted. This is aside from the situations where they are directed not to intervene even when, for example, a star’s collapse is going to destroy a civilization made up of billions of individuals that have moral valence, through no particular fault of that society and having no bearing on whether they will achieve Starfleet’s preferred standard of morality. I find the idea that it is universally negative from a cost-benefit perspective to “interfere” with a culture’s development, such that this becomes the first and most important rule of Starfleet, to be utterly preposterous and morally repugnant, as well as a hilarious injustice to individuals in the name of judging them based solely on their group membership.
Of course, I acknowledge that it’s not “about” that, in a way. The real purpose of the Prime Directive is for Gene Roddenberry to aggressively signal how much he disagrees with the imperialism that historically occurred on Earth, but it makes no in-fiction sense, given their supposedly advanced levels of moral development and superior anthropological knowledge.
Reread that sentence. Notice how the second half seems to contradict the first.
Perhaps you could explain? Social Darwinism in Earth terms seems to be the idea of “survival of the fittest” individuals within a society, but here I’m referring to a Star Trek variant of social Darwinism that occurs at the level of the society (similar to some definitions of social Darwinism described on the Wikipedia page, e.g. under the first Nazism header). The reason I call it social Darwinism rather than merely evolution is that in-fiction it occurs due to advances in societal values rather than because of biological changes, but perhaps this isn’t the clearest choice of terms. Societies which are able to develop advanced technology are given moral weight by the powers that be, but those who have not yet developed such technology are given no moral weight. This moral weight is demonstrated by the willingness to avert extinction when such actions carry apparently trivial cost for the Enterprise crew, which seems to leave little room for doubt. I am proposing that sentencing the individuals within these societies to death because of insufficient societal “advancement” (collective action) is the evil part here.
We do? This is not at all obvious. Consider the genetic changes in domestic animals, for example.
I will grant that humans are still evolving, because obviously you can’t turn it off in the broader sense. But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals. Perhaps this is a weakness in our knowledge and in the Star Trek universe it’s clear that biological evolution continues to proceed in such a way that is morally relevant (even though they never say anything like that), but it seems unlikely based on what we currently know that a smarter humanity is in the cards through evolutionary (vs. technological) means.
Note that the above sentence implicitly uses deontological reasoning.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct.
This isn’t a statement about their current ethics, but a statement about what is available to them given their current cognitive abilities. It’s an empirical question whether a person has the ability to understand deontological, consequentialist, or virtue ethics.
I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric.
Reread that sentence. Notice how the second half seems to contradict the first.
Perhaps you could explain?
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals.
I didn’t say anything about moral weight, largely because I’ve never heard a good explanation of how it is supposed to be assigned. I’m talking about their cognitive abilities, in particular their ability to act sufficiently morally.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct. [emphasis mine]
That’s deontological reasoning (there is a chance these people can be saved, thus it is our duty to try). Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
Fair enough. One difficulty of consequentialism is that unpacking it into English can be either difficult or excessively verbose. The reason Star Trek style social Darwinism is barbaric is because of its consequences (death of billions), not because it violates a moral rule that I have regarding social Darwinism. If it worked, then that would be fine.
Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
The reason I said it “can be a net benefit” is specifically because I was trying to imply that one should weigh those consequences and act accordingly, not take action based on the fact that it is possible. The Prime Directive is a bright-line rule that precludes such weighing of consequences.