An example from fiction: in The Dark Knight, Batman refuses to kill the Joker. From a consequentialist point of view, it would save many more lives if Batman just killed the damn Joker. He refuses to do this because it would make him a Killer, and he doesn’t want that. Yet, intuitively, we view Batman as virtuous for not killing him.
One could also give this a deontological interpretation: Batman strictly follows “Thou shalt not kill”. I think, in general, that deontology and virtue ethics have a lot in common: if you follow deontology, you become a person Who Follows A Code, and this is viewed as virtuous. Thus, in fiction, we feel sympathetic to criminals who have a code. Examples: every heist movie ever.
Interestingly, we view law-enforcers as good as long as they stick to their deontology. Another example from fiction: in the movie The Untouchables, Eliot Ness (Kevin Costner) unleashes a huge fury of death and destruction by targeting mobsters who violate Prohibition. At the end, just as he catches Al Capone, Prohibition is repealed and a journalist asks him what he’s going to do now. He replies: “I’ll get a drink.”
Instinctively, I felt that he was a Good Guy. So, clearly he didn’t see any intrinsic value in Prohibition, but he still went about defending it because it was The Law. And people who uphold The Law are the good guys.
Interestingly, if your Code is Consequentialism, then you don’t get much sympathy.
Yet, intuitively, we view Batman as virtuous for not killing him.
I don’t.
I’m frequently annoyed with supposed “good guys” letting the psychopathic super baddy live, taking their boot off his throat, only to lose many more lives and have to stop the bad guy again and again. I don’t view them as virtuous; I view them as holding the idiot ball to keep the narrative going. It’s like a white-cat-stroking bad guy who sends James Bond off to die some elaborate ceremonial death, instead of clubbing him unconscious, putting a few rounds in his head, and having him rolled up in a carpet and thrown out.
Note that the storyline often allows the hero to have his “virtue” and execution too, as the bad guy will often overpower the idiot security forces holding him to pull a gun and shoot at the hero, allowing the hero to return fire in self defense. How transparent and tiresome. Generally, “moral dilemmas” in movies are just this kind of dishonest exercise in having your cake and eating it too. How I long for a starship to explode when the Captain ignores the engineer and says “crank it to 11”, or for some bozo to be snuffed out the moment he says “never tell me the odds”.
Bond actually refused to play that game in GoldenEye.
[Bond is holding Trevelyan by his foot on top of the satellite antenna.] Trevelyan: For England, James? Bond: No. For me. [lets Trevelyan fall to his death]
Boy, are you ever on the right website. As far as I can tell, this place is basically a conspiracy full of Dangerously Genre Savvy people trying to get good things done in real life through the use of our Dangerous Genre Savvy.
Now if you’ll excuse me, I need to go find a white dog to pet. I’m allergic to cats.
Pretty much, yes. The whole difference between Genre and Genre Savvy is that a Genre Savvy viewer recognizes what would actually happen in real life, whereas fictional characters not only don’t recognize that, their whole universe functions in a different, less logical way.
In fiction, refusing to shoot Osama bin Laden means he ends up serving time in jail, and justice is served.
In real life, refusing to shoot Osama bin Laden means he tells his followers he has enjoyed a Glorious Victory Against the Western Kuffar Cowards (don’t laugh: this is what fascist movements actually believe), which spurs them to a new wave of violence.
In fiction, refusing to shoot Osama bin Laden means he ends up serving time in jail, and justice is served.
Depends on the genre. Sometimes it means he waits until your back is turned and tries to kill you, thereby allowing you to kill him to defend yourself. Sometimes it means he goes free and mocks you and then dies of a heart attack. Sometimes it means he goes free and his mocking laughter is heard over the credits.
I think the genre I’m railing against is Dishonest Moral Propaganda. That’s what irks me: they’re using lies to make a case for some nitwit ideology or behavior.
Becoming sympathetic to consequentialist thought has definitely ruined most (almost all?) pop culture artifacts involving morality for me. I just sit there thinking, “Wow, they should definitely put a bullet in that guy’s head ASAP,” interleaved with, “Wait, what’s the big deal, I don’t see anyone getting hurt here,” depending on the genre. Watch Star Trek TNG episodes with this in mind and you will quickly think that they are simultaneously completely incompetent and morally monstrous (the Prime Directive is one of the most evil rules imaginable).
Becoming sympathetic to consequentialist thought has definitely ruined most (almost all?) pop culture artifacts involving morality for me.
Try being sympathetic to egoist thought and watching the movies. While I enjoy “It’s a Wonderful Life” and “The Philadelphia Story”, I consider the morality monstrous.
“Wow, they should definitely put a bullet in that guy’s head ASAP,”
Yes, and in the magic fictional universe, not blowing him away when you had the chance miraculously turns out for the good, instead of getting everyone killed.
I found the Prime Directive to be one of the hardest lessons in consequentialism. If it existed in the real world, we would not have many of the current problems in the developing world, where people slaughter each other using modern weapons instead of spears and bows. And they coordinate the slaughter using modern tech, too. And the radicalization of Islam has been caused in part by the Western ideas of decompartmentalization. Exploiting poorer nations and depleting their natural resources doesn’t help much, either. The so-called foreign aid does more harm than good, too. If only Europeans had had enough sense to refrain from saving the savages until they were ready.
As I said in another comment in this thread, we know that the real-world reason the Prime Directive exists is that Gene Roddenberry hated historical European imperialism. I grant that the Prime Directive may be a handy rule of thumb given imperfect knowledge and the in-universe history of interference. My main problem with it is that it is a zero tolerance policy where the outcome of following it is, rather than someone being expelled for bringing Tylenol to school, the extinction of a species with billions of lives. It would be as if Europeans knew Africa was going to sink into the ocean in one year and weren’t even willing to tell the Africans it was going to happen (and then patted themselves on the back for being so enlightened). And this becomes the core founding principle of the Federation.
My main problem with it is that it is a zero tolerance policy where the outcome of following it is, rather than someone being expelled for bringing Tylenol to school, the extinction of a species with billions of lives.
I don’t think you interpret the Prime Directive the way Gene Roddenberry did. The directive says that you don’t meddle in the affairs of other cultures just because they act in a way that seems wrong to you (incidentally, that’s why I am unimpressed with the reactions of all 3 species in Three Worlds Collide: all 3 are overly Yudkowskian in their interpretation of morality as objective). It does not say that you should not attempt to save them from certain extinction or disaster, and there are several episodes where our brave heroes do just that. All the while trying to minimize their influence on said cultures otherwise, admittedly with mixed results.
See the episode Pen Pals. The population is going to be destroyed by a geological collapse, and Picard decides that the Prime Directive requires them to let everyone there die. Of course, by sheer luck they hear a girl call for help to Data while they are debating the issue, which Picard determines is a “plea for help”, so responding doesn’t violate the Prime Directive. But without that plea, they were going to let everyone die (even though they had the technological capability to save the world without anyone knowing they intervened). I believe this episode had the most protracted discussion of the Prime Directive that we have seen in-fiction. In Homeward, Picard considers it a grave violation of the Prime Directive that Worf’s brother attempted to save a population when everyone on their planet was going to die in 38 hours.
OK, you have a point; sometimes it does not mean what I thought it did. If you look at the general description of it, however, there are 8 items there, and only one of them (“Helping a society escape a natural disaster known to the society, even if inaction would result in a society’s extinction.”) is of the questionable type you describe. The original statement, “no identification of self or mission; no interference with the social development of said planet; no references to space, other worlds, or advanced civilizations,” also makes perfect sense.
If it existed in the real world, we would not have many of the current problems in the developing world, where people slaughter each other using modern weapons instead of spears and bows.
I am still not convinced that life would be better in that parallel reality. Why exactly is being killed by a gun worse than being killed by a spear?
In many cases, those civilizations were knocked back to “savage level” by dehumanizing levels of exploitation, colonization, and sheer deliberate destruction by Europeans in the first place.
This doesn’t excuse the behavior of post-colonialist Third World countries, except in the sense that one who creates a power vacuum may bear some responsibility for whoever fills it.
Maybe I was unclear. It seems that you and I agree that the Prime Directive would be a good default deontological rule when dealing with less advanced societies.
“Wow, they should definitely put a bullet in that guy’s head ASAP,”
Consider the comparable real life situation. LessWrong has a policy against listing real life examples, so I won’t, but you should be able to think of some. While we’re at it, think about the reason LW has this policy.
“Wait, what’s the big deal, I don’t see anyone getting hurt here,”
You mean you don’t see anyone getting immediately hurt. The kind of civilization-affecting decisions that occur on Star Trek frequently have indirect effects that are orders of magnitude larger than their direct effects.
The problem is that fiction often removes the most compelling reasons that this sort of thinking doesn’t work in the real world (uncertainty regarding facts, uncertainty regarding moral reasoning), but tries to retain the moral ambiguity. I think I would be much happier if police were perfect virtue ethicists or deontological reasoners than is currently the case, but if Blofeld reveals his dastardly plans to Bond, I want as many bullets in his head as can be arranged in short order.
The problem is that fiction often removes the most compelling reasons that this sort of thinking doesn’t work in the real world (uncertainty regarding facts, uncertainty regarding moral reasoning)
To a certain extent this is true due to narrative requirements; however, to a certain extent it’s a realistic portrayal of what certain states of knowledge can feel like from the inside.
Edit: Also, this helps reduce the number of memetic hazards in fiction.
I haven’t watched Star Trek, so I looked up the Prime Directive on Wikipedia. Interestingly, there’s a quote from Jean-Luc Picard suggesting that the justification for the directive is actually broadly consequentialist:
The Prime Directive is not just a set of rules. It is a philosophy, and a very correct one. History has proven again and again that whenever mankind interferes with a less developed civilization, no matter how well intentioned that interference may be, the results are invariably disastrous.
Pretty sure that the results haven’t invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed. I do agree that making non-intervention an inviolable diktat, especially in an extremely populated universe, is horribly misguided.
Pretty sure that the results haven’t invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed.
Compared to what?
Was life all bright and shiny before civilization interjected itself into the existing barbarism?
I’ve generally considered the Prime Directive moral cowardice dressed up in self righteousness.
Was life all bright and shiny before civilization interjected itself into the existing barbarism?
In the Star Trek universe, frequently yes. Granted, this is completely unrealistic sociology, but then again warp drive and the transporters are completely unrealistic physics.
Compared to how things were before the intervention. And no, things usually weren’t “bright and shiny” before either, but it is possible for shitty situations to get even shittier.
I believe there was an article on Overcoming Bias about how people frequently use consequentialist logic to support their beliefs, when their underlying reasoning is anything but a dispassionate analysis, and I think that logic applies to Picard’s quote.
The justification for the Prime Directive that has appeared in multiple episodes I’ve watched (I have been watching all of the episodes, starting with the original series, and am now several seasons through TNG) is that we need to see if these societies are able to successfully “develop” past the stages of evil and become enlightened societies. I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric. We already know from real life that there has been no significant biological evolution since humans developed mature civilizations, and yet we are to believe that the right moral choice is to let these species “evolve naturally” to see if they are worthy (they are allowed to know of Starfleet once they have achieved warp drive technology). If these people are biologically capable of advanced moral thought, that capability exists whether they are currently exercising it or not.
The basic question is whether you think the world would have turned out better or worse if you could go back several hundred years and tell humans, “Hey, this slavery thing is not so hot, it really doesn’t work out well,” and other moral truths that we take for granted. This is aside from the situations where they are directed not to intervene even when, for example, a star’s collapse is going to destroy a civilization made up of billions of individuals that have moral valence, through no particular fault of that society and having no bearing on whether they will achieve Starfleet’s preferred standard of morality. I find the idea that it is universally negative from a cost-benefit perspective to “interfere” with a culture’s development, such that this becomes the first and most important rule of Starfleet, to be utterly preposterous and morally repugnant, as well as a hilarious injustice to individuals in the name of judging them based solely on their group membership.
Of course, I acknowledge that it’s not “about” that, in a way. The real purpose of the Prime Directive is for Gene Roddenberry to aggressively signal how much he disagrees with imperialism which historically occurred on Earth, but it makes no in-fiction sense, given their supposedly advanced levels of moral development and superior anthropological knowledge.
Reread that sentence. Notice how the second half seems to contradict the first.
Perhaps you could explain? Social Darwinism in Earth terms seems to be the idea of “survival of the fittest” individuals within a society, but here I’m referring to a Star Trek variant of social Darwinism that occurs at the level of the society (similar to some definitions of social Darwinism described on the Wikipedia page, e.g. under the first Nazism header). The reason I call it social Darwinism rather than merely evolution is because in-fiction it occurs due to advances in societal values rather than because of biological changes, but perhaps this isn’t the clearest choice of terms. Societies which are able to develop advanced technology are given moral weight by the powers that be, but those who have not yet developed such technology are given no moral weight. This moral weight is demonstrated by the willingness to avert extinction when such actions carry apparently trivial cost for the Enterprise crew, which seems to leave little room for doubt. I am proposing that sentencing the individuals within these societies to death because of insufficient societal “advancement” (collective action) is the evil part here.
We do? This is not at all obvious. Consider the genetic changes in domestic animals, for example.
I will grant that humans are still evolving, because obviously you can’t turn it off in the broader sense. But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals. Perhaps this is a weakness in our knowledge, and in the Star Trek universe it’s clear that biological evolution continues to proceed in a morally relevant way (even though they never say anything like that), but it seems unlikely based on what we currently know that a smarter humanity is in the cards through evolutionary (vs. technological) means.
Note that the above sentence implicitly uses deontological reasoning.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct.
This isn’t a statement about their current ethics, but a statement about what is available to them given their current cognitive abilities. It’s an empirical question whether a person has the ability to understand deontological, consequentialist, or virtue ethics.
I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric.
Reread that sentence. Notice how the second half seems to contradict the first.
Perhaps you could explain?
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals.
I didn’t say anything about moral weight, largely because I’ve never heard a good explanation of how it is supposed to be assigned. I’m talking about their cognitive abilities, in particular their ability to act sufficiently morally.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct. [emphasis mine]
That’s deontological reasoning (there is a chance these people can be saved, thus it is our duty to try). Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
Fair enough. One difficulty of consequentialism is that unpacking it into English can be either difficult or excessively verbose. The reason Star Trek style social Darwinism is barbaric is because of its consequences (death of billions), not because it violates a moral rule that I have regarding social Darwinism. If it worked, then that would be fine.
Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
The reason I said it “can be a net benefit” is specifically because I was trying to imply that one should weigh those consequences and act accordingly, not take action based on the fact that it is possible. The Prime Directive is a bright-line rule that precludes such weighing of consequences.
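The distinction between “it is possible to save them” and “the attempt is worth making” is just an expected-value comparison. A toy sketch in Python (all numbers are invented for illustration, not drawn from the discussion above):

```python
def expected_value(p_success, benefit, cost_of_failure):
    """Probability-weighted value of attempting an intervention:
    expected benefit of success minus expected harm of failure."""
    return p_success * benefit - (1 - p_success) * cost_of_failure

# A consequentialist intervenes only when the expectation is positive,
# not merely because success is possible (p_success > 0):
print(expected_value(0.5, 100, 50))   # even odds, big upside -> 25.0
print(expected_value(0.25, 100, 40))  # long shot -> -5.0

# A bright-line rule like the Prime Directive skips this weighing
# entirely and answers "don't intervene" for every input.
```

The point is only that the weighing happens at all; the hard part in real cases is estimating the inputs, which is exactly the uncertainty discussed above.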
From a consequentialist point of view, it would save many more lives if Batman just killed the damn Joker. He refuses to do this because it would make him a Killer, and he doesn’t want that.
Nitpicking time. I’m not so sure that that’s the reason. He’s also playing a long game in which Batman is supposed to be a symbol of what is possible. This reasoning produces actions that have potential short-term problems but cause many others to do better over a long time period.
Imagine what the police would do if they followed the popular conception of what consequentialism is. That’s an expected consequence of police action, so if it’s worse than what they’re doing now, they won’t choose to do it (under a sufficiently savvy model of consequentialism).
“Thou shalt not kill” is actually nothing more than a consequentialist heuristic posing as deontological/virtue ethics.
If Batman kills Joker in lieu of a trial, he is a de facto “good guy” authority setting a precedent for such eye-for-an-eye behavior throughout all of Gotham. That is a potentially powerful meme given Batman’s status and could reasonably lead to a norm of ruthless, draconian law enforcement methods for decades to come. There are meta-consequentialist considerations at play.
Killing Joker means, in some sense, Batman had to agree that Joker’s ethics—killing your enemy to advance your ends—work.
Of course there are times where killing, stealing, lying are consequentially a net positive, but it is very useful to have deontological norms prohibiting those actions and ascribe virtues to those people who follow the rules. It is, in fact, the best consequentialist policy over time.
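The claim that a blanket norm can be the best consequentialist policy can be sketched with a toy model in Python (the case values and estimates below are invented for illustration): when in-the-moment judgment is unreliable, an agent who never kills can end up with better outcomes than one who kills whenever it looks net-positive.

```python
def policy_value(true_values, estimates, follow_rule):
    """Total realized value of a 'kill' decision policy.
    follow_rule=True: never kill (the deontological norm), value 0 per case.
    follow_rule=False: kill whenever the (fallible) estimate says net-positive."""
    if follow_rule:
        return 0
    return sum(v for v, est in zip(true_values, estimates) if est > 0)

# True net value of killing in five cases (usually strongly negative),
# versus what a fallible agent estimates in the moment:
true_values = [-50, -40, 5, -60, 10]
estimates   = [ 10,  -5, 8,  15,  6]  # two big misjudgments

act_total  = policy_value(true_values, estimates, follow_rule=False)  # -95
rule_total = policy_value(true_values, estimates, follow_rule=True)   # 0
```

Here the case-by-case agent captures the small genuine gains (5 and 10) but also commits the two catastrophic mistakes, so the rule-follower comes out ahead; with perfectly accurate estimates the comparison would of course reverse.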
Killing Joker means, in some sense, Batman had to agree that Joker’s ethics—killing your enemy to advance your ends—work.
At least in The Dark Knight, the Joker was an outright nihilist. His primary goal was simply to prove that everyone is as crazy as him underneath.
Mind, the whole supposed Moral Dilemma about Society on the Brink of Collapse should anyone ever See Through the Noble Lie and realize that the Joker Was Right and there really is just Nothing… well, it kinda goes away once you confront the abyss yourself and realize that, given a blank canvas, you’d prefer to paint a pretty picture than burn the building down.
(Or in other words, the Joker presumed to prove that people must be Nihilists like him underneath, without considering whether the result might not be a heavily-armed batch of Existentialists.)
Spoiler Alerts
An example from fiction. In the Dark Knight, Batman refuses to kill the Joker. From a consequential point of view, it would save many more lives if Batman just killed the damn Joker. He refuses to do this because it would make him a Killer, and he doesn’t want that. Yet, intuitively, we view Batman as virtuous for not killing him.
One could also give this a deontological interpretation: Batman strictly follows “Thou shall not kill”. I think, in general, that deontology and virtue ethics have a lot in common: if you follow deontology, you’re becoming a person Who Follows A Code, and this viewed as virtuous. Thus, in fiction, we feel sympathetic to criminals who have a code. Examples: every heist movie ever.
Interestingly, we view law-enforcers as good as long as they stick to the deontology. Another example from fiction, in the movie Untouchables, Eliot Ness (Kevin Costner) unleashes a huge fury of death and destruction by targeting mobs who violate the Prohibition. At the end, just as he catches Al Capone, the Prohibition is repealed and he is asked by a journalist what he’s going to do now. He replies: “I’ll get a drink.”
Instinctively, I felt that he was a Good Guy. So, clearly he didn’t see any intrinsic value to the Prohibition, but he still went about defending it because it was The Law. And people who uphold the The Law are the good guys.
Interestingly, if your Code is Consequentialism, then you don’t get much sympathy.
I don’t.
I’m frequently annoyed with supposed “good guys” letting the psychopathic super baddy live, taking their neck off their throats, only to lose many more lives and have to stop the bad guy again and again. I don’t view them as virtuous, I view them as holding the idiot ball to keep the narrative going. It’s like a bad guy stroking a white cat who sends James Bond off to die some elaborate ceremonial death, instead of clubbing him unconscious, putting a few rounds in his head, and having him rolled up in the carpet and thrown out.
Note that the storyline often allows the hero to have his “virtue” and execution too, as the bad guy will often overpower the idiot security forces holding him to pull a gun and shoot at the hero, allowing the hero to return fire in self defense. How transparent and tiresome. Generally “moral dilemmas” in movies are just this kind of dishonest exercise in having your cake and eating it too. How I long for a starship to explode when the Captain ignores the engineer and says “crank it to 11”, or see some bozo snuffed out the moment he says “never tell me the odds”.
Bond actually refused to play that game in Goldeneye.
Boy, are you ever on the right website. As far as I can tell, this place is basically a conspiracy full of Dangerously Genre Savvy people trying to get good things done in real life through the use of our Dangerous Genre Savvy.
Now if you’ll excuse me, I need to go find a white dog to pet. I’m allergic to cats.
That Genre being ‘things that actually happen’, which would be a very niche genre in fiction?
Pretty much, yes. The whole difference between Genre and Genre Savvy is that a Genre Savvy viewer recognizes what would actually happen in real life, whereas fictional characters not only don’t recognize that, their whole universe functions in a different, less logical way.
In fiction, refusing to shoot Osama bin Laden means he ends up serving time in jail, and justice is served.
In real life, refusing to shoot Osama bin Laden means he tells his followers he has enjoyed a Glorious Victory Against the Western Kuffar Cowards (don’t laugh: this is what fascist movements actually believe), which spurs them to a new wave of violence.
Depends on the genre. Sometimes it means he waits until your back is turned and tries to kill you, thereby allowing you to kill him to defend yourself. Sometimes it means he goes free and mocks you and then dies of a heart attack. Sometimes it means he goes free and his mocking laughter is heard over the credits.
More importantly, he gets to return for the sequel.
I think the genre I’m railing against is Dishonest Moral Propaganda. That’s what irks—they’re using lies to make a case for some nitwit ideology or behavior.
You didn’t even mention ‘genre’. I was just trying to figure out how eli was characterizing us here.
Becoming sympathetic to consequentialist thought has definitely ruined most (almost all?) pop culture artifacts involving morality for me. I just sit there thinking, “Wow, they should definitely put a bullet in that guy’s head ASAP,” interleaved with, “Wait, what’s the big deal, I don’t see anyone getting hurt here,” depending on the genre. Watch Star Trek TNG episodes with this in mind and you will quickly think that they are simultaneously completely incompetent and morally monstrous (the Prime Directive is one of the most evil rules imaginable).
Try being sympathetic to egoist thought and watching the movies. While I enjoy “It’s a Wonderful Life” and “The Philadelphia Story”, I consider the morality monstrous.
Yes, and in the magic fictional universe, not blowing him away when you had the chance miraculously turns out for the good, instead of getting everyone killed.
I found the Prime Directive to be one of the hardest lessons in consequentialism. If it existed in the real world, we would not have many of the current problems in the developing world, where people slaughter each other using modern weapons instead of spears and bows. And they coordinate the slaughter using modern tech, too. And the radicalization of Islam has been caused in part by the Western ideas of decompartmentalization. Exploiting poorer nations and depleting their natural resources doesn’t help much, either. The so-called foreign aid does more harm than good, too. If only Europeans had enough sense to refrain from saving the savages until they are ready.
As I said in another comment in this thread, we know that the real-world reason the Prime Directive exists is because Gene Roddenberry hated historical European imperialism. I grant that the Prime Directive may be a handy rule of thumb given imperfect knowledge and the in-universe history of interference. My main problem with it is that it is a zero tolerance policy where the outcome of following it is, rather than someone being expelled for bringing Tylenol to school, the extinction of a species with billions of lives. It would be like if Europeans knew Africa was going to sink into the ocean in one year and weren’t even willing to tell the Africans it was going to happen (and then patting themselves on the back for being so enlightened). And this becomes the core founding principle of the Federation.
I don’t think you interpret the Prime Directive the way Gene Roddenberry did. The directive says that you don’t meddle in the affairs of other cultures just because they act in a way that seems wrong to you (incidentally, that’s why I am unimpressed with the reactions of all 3 species in Three Worlds Collide: all 3 are overly Yudkowskian in their interpretation of morality as objective). It does not say that you should not attempt to save them from a certain extinction or disaster, and there are several episodes where our brave heroes do just that. All the while trying to minimize their influence on the said cultures otherwise, admittedly with mixed results.
See the episode Pen Pals. The population is going to be destroyed by a geological collapse, and Picard decides that the Prime Directive requires they let everyone there die. Of course, by sheer luck they hear a girl call for help to Data while they are debating the issue, which Picard determines is a “plea for help” so doesn’t violate the Prime Directive if they respond. But without that plea, they were going to let everyone die (even though they had the technological capability to save the world without anyone knowing they intervened). I believe this episode had the most protracted discussion of the Prirme Directive that we have seen in-fiction. In Homeward Picard considers it a grave violation of the Prime Directive that Worf’s brother has attempted to save a population when everyone on their planet was going to die in 38 hours.
OK, you have a point, sometimes it does not mean what I thought it did. If you look at the general description of it, however, there are 8 items there, only one of them (“Helping a society escape a natural disaster known to the society, even if inaction would result in a society’s extinction.”) of the questionable type you describe. The original statement, “no identification of self or mission; no interference with the social development of said planet; no references to space, other worlds, or advanced civilizations.” also makes perfect sense.
I am still not convinced that life in the parallel reality would be better. Why exactly is being killed by a gun worse than being killed by a spear?
In many cases, those civilizations were knocked back to “savage level” by dehumanizing levels of exploitation, colonization, and sheer deliberate destruction by Europeans in the first place.
This doesn’t excuse the behavior of post-colonialist Third World countries, except in the sense that one who creates a power vacuum may bear some responsibility for whoever fills it.
Maybe I was unclear. It seems that you and I agree that the Prime Directive would be a good default deontological rule when dealing with less advanced societies.
Consider the comparable real life situation. LessWrong has a policy against listing real life examples, so I won’t, but you should be able to think of some. While we’re at it, think about the reason LW has this policy.
You mean you don’t see anyone getting immediately hurt. The kind of civilization-affecting decisions that occur on Star Trek frequently have indirect effects that are orders of magnitude larger than their direct effects.
The problem is that fiction often removes the most compelling reasons that this sort of thinking doesn’t work in the real world (uncertainty regarding facts, uncertainty regarding moral reasoning), but tries to retain the moral ambiguity. I think I would be much happier if police were perfect virtue ethicists or deontological reasoners than is currently the case, but if Blofeld reveals his dastardly plans to Bond, I want as many bullets in his head as can be arranged in short order.
To a certain extent this is true due to narrative requirement; however, to a certain extent it’s a realistic portrayal of what our certain states of knowledge can feel like from the inside.
Edit: Also this helps reduce the amount of memetic hazards in fiction.
I haven’t watched Star Trek, so I looked up the Prime Directive on Wikipedia. Interestingly, there’s a quote from Jean-Luc Picard suggesting that the justification for the directive is actually broadly consequentialist:
Pretty sure that the results haven’t invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed. I do agree that making non-intervention an inviolable diktat, especially in an extremely populated universe, is horribly misguided.
Compared to what?
Was life all bright and shiny before civilization interjected itself into the existing barbarism?
I’ve generally considered the Prime Directive moral cowardice dressed up in self righteousness.
In the Star Trek universe, frequently yes. Granted this is completely unrealistic sociology, but then again warp drive and the transporters are completely unrealistic physics.
Compared to how things were before the intervention. And no, things usually weren’t “bright and shiny” before either, but it is possible for shitty situations to get even shittier.
I believe there was an article on Overcoming Bias about how people frequently use consequentialist logic to support their beliefs, when their underlying reasoning is anything but a dispassionate analysis, and I think that logic applies to Picard’s quote.
The justification for the Prime Directive that has appeared in multiple episodes I’ve watched (I have been watching all of the episodes, starting with the original series, and am now several seasons through TNG) is that we need to see if these societies are able to successfully “develop” past the stages of evil and become enlightened societies. I don’t ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric. We already know from real life that there has been no significant biological evolution since humans developed mature civilizations, and yet we are to believe that the right moral choice is to let these species “evolve naturally” to see if they are worthy (they are allowed to know of Starfleet once they have achieved warp drive technology). If these people are biologically capable of advanced moral thought, that capability exists whether they are currently exercising it or not.
The basic question is whether you think the world would have turned out better or worse if you could go back several hundred years and tell humans, “Hey, this slavery thing is not so hot, it really doesn’t work out well,” and other moral truths that we take for granted. This is aside from the situations where they are directed not to intervene even when, for example, a star’s collapse is going to destroy a civilization made up of billions of individuals that have moral valence, through no particular fault of that society and with no bearing on whether it will achieve Starfleet’s preferred standard of morality. I find the idea that it is universally negative from a cost-benefit perspective to “interfere” with a culture’s development, such that this becomes the first and most important rule of Starfleet, to be utterly preposterous and morally repugnant, as well as a hilarious injustice to individuals in the name of judging them based solely on their group membership.
Of course, I acknowledge that it’s not “about” that, in a way. The real purpose of the Prime Directive is for Gene Roddenberry to aggressively signal how much he disagrees with imperialism which historically occurred on Earth, but it makes no in-fiction sense, given their supposedly advanced levels of moral development and superior anthropological knowledge.
Reread that sentence. Notice how the second half seems to contradict the first.
We do? This is not at all obvious. Consider the genetic changes in domestic animals, for example.
Note that the above sentence implicitly uses deontological reasoning.
Perhaps you could explain? Social Darwinism in Earth terms seems to be the idea of “survival of the fittest” individuals within a society, but here I’m referring to a Star Trek variant of social Darwinism that occurs at the level of the society (similar to some definitions of social Darwinism described on the Wikipedia page, e.g. under the first Nazism header). The reason I call it social Darwinism rather than merely evolution is because in-fiction it occurs due to advances in societal values rather than because of biological changes, but perhaps this isn’t the clearest choice of terms. Societies which are able to develop advanced technology are given moral weight by the powers that be, but those who have not yet developed such technology are given no moral weight. This moral weight is demonstrated by the willingness to avert extinction when such actions carry apparently trivial cost for the Enterprise crew, which seems to leave little room for doubt. I am proposing that sentencing the individuals within these societies to death because of insufficient societal “advancement” (collective action) is the evil part here.
I will grant that humans are still evolving, because obviously you can’t turn it off in the broader sense. But I haven’t found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals. Perhaps this is a weakness in our knowledge and in the Star Trek universe it’s clear that biological evolution continues to proceed in such a way that is morally relevant (even though they never say anything like that), but it seems unlikely based on what we currently know that a smarter humanity is in the cards through evolutionary (vs. technological) means.
I don’t think so, but I’m not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can be a net benefit and thus morally correct.
This isn’t a statement about their current ethics, but a statement about what is available to them given their current cognitive abilities. It’s an empirical question whether a person has the ability to understand deontological, consequentialist, or virtue ethics.
You claim to not ascribe moral valence to societies, and then promptly proceed to declare a social system “barbaric”.
I didn’t say anything about moral weight, largely because I’ve never heard a good explanation of how it is supposed to be assigned. I’m talking about their cognitive abilities, in particular their ability to act sufficiently morally.
That’s deontological reasoning (there is a chance these people can be saved, thus it is our duty to try). Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just whether they can be saved.
Fair enough. One difficulty of consequentialism is that unpacking it into English can be either difficult or excessively verbose. The reason Star Trek style social Darwinism is barbaric is because of its consequences (death of billions), not because it violates a moral rule that I have regarding social Darwinism. If it worked, then that would be fine.
The reason I said it “can be a net benefit” is specifically because I was trying to imply that one should weigh those consequences and act accordingly, not take action based on the fact that it is possible. The Prime Directive is a bright-line rule that precludes such weighing of consequences.
Nitpicking time. I’m not so sure that that’s the reason. He’s also playing a long game in which Batman is supposed to be a symbol of what is possible. This reasoning produces actions that have short-term potential problems but causes many others to do better over a long time period.
There is a (meta-consequentialist) reason for this. Imagine what would happen if police were encouraged to act in a consequentialist manner.
Imagine what the police would predict would happen if they followed the popular conception of what consequentialism is. That’s an expected consequence of police action, so if it’s worse than what they’re doing now, they won’t choose to act that way (under a sufficiently savvy model of consequentialism).
There is a term for this: rule consequentialism.
This is the best response, I think.
“Thou shall not kill” is actually nothing more than a consequentialist heuristic posing as deontological/virtue ethics.
If Batman kills Joker in lieu of a trial, he is a de facto “good guy” authority setting a precedent for such eye-for-eye behavior throughout all of Gotham. That is a potentially powerful meme given Batman’s status and could reasonably lead to a norm of ruthless, draconian law enforcement methods for decades to come. There are meta-consequentialist considerations at play.
Killing Joker means, in some sense, Batman had to agree that Joker’s ethics—killing your enemy to advance your ends—work.
Of course there are times when killing, stealing, and lying are consequentially a net positive, but it is very useful to have deontological norms prohibiting those actions and to ascribe virtue to the people who follow the rules. It is, in fact, the best consequentialist policy over time.
At least in The Dark Knight, the Joker was an outright nihilist. His primary goal was simply to prove that everyone is as crazy as him underneath.
Mind, the whole supposed Moral Dilemma about Society on the Brink of Collapse should anyone ever See Through the Noble Lie and realize that the Joker Was Right and there really is just Nothing… well, it kinda goes away once you confront the abyss yourself and realize that, given a blank canvas, you’d prefer to paint a pretty picture than burn the building down.
(Or in other words, the Joker presumed to prove that people must be Nihilists like him underneath, without considering whether the result might not be a heavily-armed batch of Existentialists.)