For an individual defecting from a societal norm, consequentialism may increase that individual's chances of surviving. But if consequentialism becomes a societal norm, it may reduce the chances of the collective surviving.
Wait, what? You’re saying that all the individuals survived, but the collective didn’t? That seems to be saying that a particular organizational configuration ceased to exist, but not that everybody died. The phrasing here is ambiguous.
If this is true, then societies composed of consequentialists will die faster than societies composed of deontologists. And evolution as a force typically acts on collectives, not individuals.
This just seems confused. Evolution acts on individuals, unless you’re talking about the force of evolution operating (again) on organizational configuration rather than genetics. But societies in such cases often “evolve” by changing rules and structures, not always by collapsing and being replaced.
Section 1 of the paper has a lot of good examples of how consequentialists are more likely to break laws, violate others’ rights, and so on, when the consequences justify it. It also discusses and references a number of older papers on how such agents find it harder to form both social and business relationships and to generate the trust needed to solve coordination problems.
This sounds like naive consequentialism, not LessWrong-style consequentialism. A proper consequentialist decision theory takes into account long-term effects of making certain types of choices, not just the short-term effects of individual choices.
(That is, a proper consequentialist foresees that being the kind of person who breaks agreements for short-term benefits has long-term negative consequences, and so they don’t do that.)
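To make that concrete, here’s a toy sketch in Python (the payoffs, discount factor, and the long_run_value helper are all invented purely for illustration) of the comparison such an agent is implicitly making: a one-time gain from breaking an agreement versus the discounted stream of cooperation it forfeits.

```python
# Illustrative only: the numbers below are made up; the point is just that
# the long-run term can dominate the short-term one.

def long_run_value(per_round_payoff, discount, rounds):
    """Discounted sum of a constant per-round payoff over a fixed horizon."""
    return sum(per_round_payoff * discount**t for t in range(rounds))

# Keep agreements: earn a modest cooperative payoff every round.
keep = long_run_value(per_round_payoff=1.0, discount=0.95, rounds=50)

# Defect once: a bigger one-off payoff, but the relationship (and its
# future payoffs) is gone afterwards.
defect = 3.0 + long_run_value(per_round_payoff=0.0, discount=0.95, rounds=50)

print(keep > defect)  # True with these numbers: the short-term win isn't worth it
```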
Re: the second quote, I mean that evolution selects for those traits that ensure collective survival.
It really, really doesn’t. It selects for the proliferation of genes that proliferate, which is very, very different.
A trait like “one person is willing to kill 10 others to ensure their own survival” will be selected for less strongly than one like “one person is willing to die to save someone else”.
No, it selects for “one person is willing to die to save someone who is a sufficiently close relation, especially of the next generation”. If there were no correlation between the trait and relatedness, the trait would be extinguished.
(And being willing to kill 10 others isn’t selected against either, so long as the others are strangers or rivals for resources, mates, etc.)
Selection works on relative frequency of genes, not on groups or individuals. To the extent that we have any sort of group feeling or behaviors at all, this is due to commonality of genes. A gene won’t be universal in a population unless it provides its carriers with some sort of advantage over non-carriers. If there’s no individual advantage (or at least gene-specific advantage), it won’t become universal.
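To illustrate that last point, here’s a toy replicator model in Python (entirely my own illustration, with arbitrary fitness values): with no carrier-specific advantage the allele goes nowhere, while even a small advantage drives it toward fixation.

```python
# Toy deterministic selection on a single allele's frequency.

def next_frequency(p, w_carrier, w_other):
    """One generation of selection on allele frequency p."""
    mean_fitness = p * w_carrier + (1 - p) * w_other
    return p * w_carrier / mean_fitness

def run(p0, w_carrier, w_other, generations=200):
    p = p0
    for _ in range(generations):
        p = next_frequency(p, w_carrier, w_other)
    return p

# No advantage for carriers: the allele stays where it started.
print(run(0.01, w_carrier=1.00, w_other=1.00))  # 0.01
# A 5% fitness edge for carriers: nearly fixed after 200 generations.
print(run(0.01, w_carrier=1.05, w_other=1.00))  # ~0.99
```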
Suppose your friend asks you, purely as a hypothetical, whether you would murder someone to save two others. Simply answering this question and indicating that you’re willing to murder reduces trust with your friend.
This sounds less like “consequentialism reduces trust” than “willingness to murder reduces trust” or perhaps “utilitarianism reduces trust”.
Now maybe LessWrong-style consequentialism requires you to lie to your friend; that hasn’t been studied.
I would expect a LW-style consequentialist to reject such a simple framework as “kill one person to save two” without first requiring an awful lot of Least Convenient World stipulations to rule out alternatives, and/or to prefer to let two people die in the short run rather than establish certain horrible precedents or perverse incentives in the long run, reject the whole thing as a false dichotomy, etc. etc.
Really, I find it hard to imagine a rational consequentialist simply taking the scenario at face value and agreeing to straight-up murder even in a fairly hypothetical discussion.
Doesn’t “sufficiently close relation” also apply with some strength to any being of the same species? Consider a species A that is splitting into two subspecies A1 and A2. This could be due to members of A1 preferring to save other members of A1. Once A2 dies out, A1 retains the trait of wanting to save other members of A1.
Only after the gene is already essentially universal in the general population. When a gene with altruistic inclinations first appears, it will only increase its propagation by favoring others with the same gene. Otherwise, self-sacrifice will more likely extinguish the gene than spread it.
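This is just Hamilton’s rule: costly helping is favored only when relatedness times benefit exceeds cost, which is another way of saying the help has to land on likely carriers of the same gene. A quick sketch with illustrative numbers of my own choosing:

```python
# Hamilton's rule: an allele for costly helping spreads only if r*b > c.

def altruism_favored(r, b, c):
    """r = relatedness to the beneficiaries, b = benefit conferred, c = cost paid."""
    return r * b > c

# Dying (c = 1) to save two strangers (r ~ 0): the allele is selected against.
print(altruism_favored(r=0.0, b=2, c=1))  # False
# Dying (c = 1) to save three full siblings (r = 0.5): the allele gains ground.
print(altruism_favored(r=0.5, b=3, c=1))  # True
```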
I would be interested in knowing the Least Convenient World stipulations, and what this phrase means.
See The Least Convenient Possible World for where the term was introduced.
Precedents and perverse incentives can be ruled out by assuming none exist, right? Assume in the hypothetical that nobody will ever get to know what choice you made after you made it.
But answering the question means that somebody will know: whoever is asking the question and anyone present to hear the answer. And since it’s a hypothetical, the most relevant incentives and consequences are those for the social situation.
I didn’t get how a hypothetical with two clear choices could be a false dichotomy. Assume that refusing to choose results in something far worse than either choice.
Far worse for whom? In what way? Consequentialism isn’t utilitarianism. If you’re taking a utilitarian position of greatest good for greatest number, then the choice is obvious. But consequentialism isn’t utilitarianism: you can choose what’s best for you, personally, and what’s best for me depends heavily on the details.
I agree, but in my mind that seems a lot like: their feelings and values are wired deontologically, their rational brain (incorrectly) thinks they are consequentialists, and they’re finding justifications for their thoughts. Unless of course they find a really good justification. (And even if they did find one, I’d be suspicious of whether the justification came after the feeling or action, or before.)
But that’s you projecting your own experience onto somebody else, aka the Typical Mind Fallacy.
My experience of being asked a utilitarian hypothetical is, “what am I going to get out of answering this stupid hypothetical?” And mostly the answer is, “nothing good”. So I’m going to attack the premise right away. It’s got zero to do with killing or not killing: my answer to the generalized question of “is it ever a good thing to kill somebody to save somebody else” is sure, of course, and that can be true even at 1:1 trade of lives.
Hell, it can be a good thing to kill somebody even if it’s not saving any lives. The more important ethical question in my mind is consent, because it’s a hell of a lot harder to construct a justification to kill somebody without their consent, and my priors suggest that any situation that seems to be generating such a justification is more likely to be an illusion or false dichotomy, that needs more time spent on figuring out what’s actually going on.
And even then, that’s not the same as saying that I would personally ever consent to killing someone, whatever the justification. But that’s not because I have a deontological rule saying “never do that”, but because I’m reasonably certain that no real good can ever come of that, without some personal benefit, like saving my own life or that of my spouse. For example, if the two people I’m saving are myself and my wife and the person being killed is somebody attacking us, then I’m much less likely to have an issue with using lethal force.
Based on a glance at the paper you referenced, though, I’m going to say that the authors incorrectly conflated consequentialism and utilitarianism. You can be a consequentialist without being a utilitarian, and even there I’m not 100% sure you can’t have a consistent utilitarian position based on utility as seen by you, as opposed to an impartial interpretation of utility.
At the very least, what the paper is specifically saying is that people don’t like impartial beneficence. That is, we want to be friends with people who will treat their friends better than everybody else. This is natural and also pretty darn obvious… and has zero to do with consequentialism as discussed on LW, where consequentialism refers to an individual agent’s utility function, and it’s perfectly valid for an individual’s utility function to privilege friends and family.
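To sketch what I mean (the names and weights below are invented purely for illustration): impartial beneficence weights everyone’s welfare equally, while an agent-relative utility function is free to weight friends more heavily.

```python
# Two toy utility functions over the same outcome; the weights are arbitrary.

def impartial_utility(welfare_by_person):
    """Impartial beneficence: everyone's welfare counts equally."""
    return sum(welfare_by_person.values())

def partial_utility(welfare_by_person, my_friends, friend_weight=3.0):
    """Agent-relative consequentialism: friends' welfare is weighted more heavily."""
    return sum(
        (friend_weight if person in my_friends else 1.0) * welfare
        for person, welfare in welfare_by_person.items()
    )

outcome = {"alice": 2.0, "bob": 1.0, "stranger": 4.0}
print(impartial_utility(outcome))                      # 7.0
print(partial_utility(outcome, my_friends={"alice"}))  # 11.0: Alice counts triple
```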
The question isn’t “can’t I”, but “why should I?” The LCPW is a tool for strengthening an argument against something; it’s not something that requires a person to accept or answer arbitrary hypotheticals.
As noted at the end of the article, the recommendation is to distinguish rejecting the entire argument from accepting it contingent on an inconvenient fact. In this particular case, I categorically reject the argument that trolley problems should be answered in a utilitarian way, because I am not a utilitarian.
Some of the studies don’t even involve real situations; they’re purely hypothetical.
Studies that are not about real situations are, by their nature, not good for thinking about the real-world impacts of ideas. It’s hard enough to get studies that use real situations to replicate in a meaningful way in psychology. There’s no intellectual basis for thinking that you can reliably extrapolate from studies about hypothetical situations like that to real-world behavior.
A philosopher is the kind of person who can switch, with remarkable speed, from a very skeptical position like being unsure whether chairs really exist to believing that he can extrapolate hypothetical data into predictions about complex real-world interactions.