Doesn’t “sufficiently close relation” also apply with some strength to any being of the same species? Consider a species A that is splitting into two subspecies, A1 and A2. The split could be due to members of A1 preferring to save other members of A1. Once A2 dies out, A1 retains the trait of wanting to save other members of A1.
Only after the gene is already essentially universal in the general population. When a gene for altruistic inclinations first appears, it will only increase its propagation by favoring others carrying the same gene. Otherwise, self-sacrifice is more likely to extinguish the gene than to spread it.
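To make that concrete, here’s a minimal toy simulation (my own sketch, not from the thread; the population size, cost, and benefit numbers are all invented for illustration). It contrasts a rare altruism gene whose carriers help random individuals with one whose carriers direct help at fellow carriers:

```python
import random

def run(pop_size=1000, carriers=50, generations=500,
        cost=0.1, benefit=0.3, targeted=False, seed=1):
    """Toy haploid population. Each generation, every carrier of the
    altruism gene pays a fitness cost to confer a larger fitness benefit
    on one other individual: a random individual if indiscriminate, or a
    random fellow carrier if targeted. Reproduction is fitness-proportional."""
    rng = random.Random(seed)
    pop = [1] * carriers + [0] * (pop_size - carriers)
    for _ in range(generations):
        carrier_idx = [i for i, g in enumerate(pop) if g]
        if not carrier_idx or len(carrier_idx) == pop_size:
            break  # gene extinct or fixed
        fitness = [1.0] * pop_size
        for i in carrier_idx:
            fitness[i] -= cost  # the altruistic act is costly to the actor
            pool = ([j for j in carrier_idx if j != i] if targeted
                    else [j for j in range(pop_size) if j != i])
            if pool:
                fitness[rng.choice(pool)] += benefit
        # next generation: parents sampled in proportion to fitness
        pop = rng.choices(pop, weights=[max(f, 0.0) for f in fitness],
                          k=pop_size)
    return sum(pop) / pop_size

# When rare, indiscriminate altruism mostly benefits non-carriers while
# its own carriers pay the cost; gene-targeted altruism keeps the benefit
# in-house, so the gene can spread.
print("indiscriminate:", run(targeted=False))
print("targeted:      ", run(targeted=True))
```

With these made-up numbers, the indiscriminate run typically drifts to extinction while the targeted run typically heads toward fixation; the exact figures don’t matter, only the direction.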
I would be interested in knowing the Least Convenient Possible World stipulations, and what this phrase means.

See The Least Convenient Possible World for where the term was introduced.
Precedents and perverse incentives can be ruled out by assuming none exist, right? Assume in the hypothetical that nobody will ever get to know what choice you made after you made it.
But answering the question means that somebody will know: whoever is asking the question and anyone present to hear the answer. And since it’s a hypothetical, the most relevant incentives and consequences are those for the social situation.
I didn’t get how a hypothetical with two clear choices could be a false dichotomy. Assume that refusing to choose results in something far worse than either choice.
Far worse for whom? In what way? Consequentialism isn’t utilitarianism. If you’re taking a utilitarian position of the greatest good for the greatest number, then the choice is obvious. But as a consequentialist you can choose what’s best for you, personally, and what’s best for me depends heavily on the details.
I agree, but in my mind that seems a lot like: their feelings and values are wired deontologically, their rational brain (incorrectly) thinks they are consequentialists, and they’re finding justifications for their thoughts. Unless of course they find a really good justification. (And even if they did find one, I’d be suspicious of whether the justification came after the feeling or action, or before.)
But that’s you projecting your own experience onto somebody else, aka the Typical Mind Fallacy.
My experience of being asked a utilitarian hypothetical is, “what am I going to get out of answering this stupid hypothetical?” And mostly the answer is, “nothing good”. So I’m going to attack the premise right away. It’s got zero to do with killing or not killing: my answer to the generalized question of “is it ever a good thing to kill somebody to save somebody else” is sure, of course, and that can be true even at a 1:1 trade of lives.
Hell, it can be a good thing to kill somebody even if it’s not saving any lives. The more important ethical question in my mind is consent, because it’s a hell of a lot harder to construct a justification to kill somebody without their consent, and my priors suggest that any situation that seems to be generating such a justification is more likely to be an illusion or a false dichotomy, one that needs more time spent on figuring out what’s actually going on.
And even then, that’s not the same as saying that I would personally ever consent to killing someone, whatever the justification. But that’s not because I have a deontological rule saying “never do that”, but because I’m reasonably certain that no real good can ever come of that, without some personal benefit, like saving my own life or that of my spouse. For example, if the two people I’m saving are myself and my wife and the person being killed is somebody attacking us, then I’m much less likely to have an issue with using lethal force.
Based on a glance at the paper you referenced, though, I’m going to say that the authors incorrectly conflated consequentialism and utilitarianism. You can be a consequentialist without being a utilitarian, and even there I’m not 100% sure you can’t have a consistent utilitarian position based on utility as seen by you, as opposed to an impartial interpretation of utility.
At the very least, what the paper is specifically saying is that people don’t like impartial beneficence. That is, we want to be friends with people who will treat their friends better than everybody else. This is natural and also pretty darn obvious… and has zero to do with consequentialism as discussed on LW, where consequentialism refers to an individual agent’s utility function, and it’s perfectly valid for an individual’s utility function to privilege friends and family.
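As a hypothetical sketch of that distinction (the names, welfare numbers, and weights below are all invented): both functions rank outcomes purely by their consequences, so both agents are consequentialists, but only the first aggregates impartially:

```python
def impartial_utility(outcome):
    # classic utilitarian aggregation: everyone's welfare counts equally
    return sum(outcome.values())

def partial_utility(outcome, weights):
    # still ranks outcomes purely by consequences, but privileges
    # friends and family via per-person weights (default weight 1)
    return sum(weights.get(person, 1.0) * welfare
               for person, welfare in outcome.items())

save_spouse   = {"spouse": 10, "stranger": 0}
save_stranger = {"spouse": 0, "stranger": 12}
spouse_first  = {"spouse": 5.0}  # my spouse counts five times as much

# The impartial utilitarian saves the stranger (12 > 10); the partial
# consequentialist saves the spouse (50 > 12). Both are consequentialists.
print(max([save_spouse, save_stranger], key=impartial_utility))
print(max([save_spouse, save_stranger],
          key=lambda o: partial_utility(o, spouse_first)))
```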
The question isn’t “can’t I”, but “why should I?” The LCPW is a tool for strengthening an argument against something; it’s not something that requires a person to accept or answer arbitrary hypotheticals.
As noted at the end of the article, the recommendation is to distinguish rejecting the entire argument from accepting the argument contingent on an inconvenient fact. In this particular case, I categorically reject the argument that trolley problems should be answered in a utilitarian way, because I am not a utilitarian.