In general, the correctness of [a principle] is one matter; the correctness of accepting it, quite another. I think you conflate the claims “consequentialism is true” and “naive consequentialist decision procedures are optimal”. Even if we have decisive epistemic reason to accept consequentialism (of some sort), we may have decisive moral or prudential reason to use non-consequentialist decision procedures. So I would at least narrow your claims to ones about consequentialist decision procedures.
evolution as a force typically acts on collectives, not individuals.
I’m not sure what you’re asserting here or how it’s relevant. Can you be more specific?
I think I’d phrase the key insight I see in “consequentialism might harm survival” differently: consequentialism is computationally expensive, and sometimes you can’t produce the desired outcome because you don’t have the time, energy, or ability to work out all the details. Thus, short-circuited consequentialism can produce worse results than other moral philosophies.
That being said, fully executed consequentialism can deal with circumstances other approaches might have a harder time with. For example, deontology works well if the rules match the environment you’re operating in. Drop into a new environment and the rules might no longer be well adapted to produce good outcomes. Similarly for virtue ethics: what’s virtuous and produces good outcomes might differ between contexts, and so virtue ethics may struggle to adapt more than consequentialism does.
In all cases it seems to be a matter of when the moral calculations were performed. In consequentialism they happen just in time, and so we may fail to do enough of them to generate good results. In the others, we do them ahead of time, which means we may have computed the right answer for the wrong situation, with no good way to generate something better quickly, because the mechanism for determining rules or virtues operates over decades or centuries of cultural evolution.
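To make the computational point concrete, here is a minimal sketch (the action names, payoffs, and budget are all invented for illustration): a just-in-time expected-value estimate can be starved of samples and pick badly, while a precomputed rule is cheap at decision time but only as good as the environment it was derived for.

```python
# Toy sketch: "just in time" consequentialist estimation vs. a precomputed rule.
# Everything here (actions, payoffs, budget) is invented for illustration.
import random

ACTIONS = ["keep_promise", "break_promise"]

def simulate_outcome(action: str) -> float:
    """Stand-in for one noisy rollout of an action's consequences."""
    base = {"keep_promise": 1.0, "break_promise": 0.8}[action]
    return base + random.gauss(0, 1.0)

def just_in_time_choice(budget: int) -> str:
    """Estimate expected value by sampling rollouts until the budget runs out.
    With a small budget the estimates are noisy and the worse action can win."""
    samples = {a: [] for a in ACTIONS}
    for i in range(budget):
        action = ACTIONS[i % len(ACTIONS)]
        samples[action].append(simulate_outcome(action))
    return max(ACTIONS, key=lambda a: sum(samples[a]) / max(len(samples[a]), 1))

PRECOMPUTED_RULES = {"promise": "keep_promise"}  # worked out "ahead of time"

def rule_lookup_choice(situation: str) -> str:
    """Cheap at decision time, but only as good as the world the rules were made for."""
    return PRECOMPUTED_RULES[situation]

if __name__ == "__main__":
    random.seed(0)
    print("tiny budget :", just_in_time_choice(budget=4))     # noisy, may pick either
    print("large budget:", just_in_time_choice(budget=4000))  # converges on keep_promise
    print("rule lookup :", rule_lookup_choice("promise"))
```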
For an individual defecting from a societal norm, consequentialism may increase that individual’s chances of surviving. But if consequentialism becomes a societal norm, it may reduce the chances of the collective surviving.
Wait, what? You’re saying that all the individuals survived, but the collective didn’t? That seems to be saying that a particular organizational configuration ceased to exist, but not that everybody died. The phrasing here is ambiguous.
If this is true, then societies composed of consequentialists will die faster than societies composed of deontologists. And evolution as a force typically acts on collectives, not individuals.
This just seems confused. Evolution acts on individuals, unless you’re talking about the force of evolution operating (again) on organizational configuration rather than genetics. But societies in such cases often “evolve” by changing rules and structures, not always by collapsing and being replaced.
Section 1 of the paper has a lot of good examples of how consequentialists are more likely to break laws, violate others’ rights, and so on, when the consequences justify it. It also discusses and references a number of older papers on how such agents find it harder to form both social and business relationships, and to generate the trust needed to solve coordination problems.
This sounds like naive consequentialism, not LessWrong-style consequentialism. A proper consequentialist decision theory takes into account long-term effects of making certain types of choices, not just the short-term effects of individual choices.
(That is, a proper consequentialist foresees that being the kind of person who breaks agreements for short-term benefits has long-term negative consequences, and so they don’t do that.)
Re: the second quote, I mean that evolution selects for those traits that ensure collective survival.
It really, really doesn’t. It selects for the proliferation of genes that proliferate, which is very, very different.
A trait where “one person is willing to kill 10 others to ensure their own survival” will be less selected for compared to one where “one person is willing to die to save someone else”.
No, it selects for “one person is willing to die to save someone who is a sufficiently close relation, especially of the next generation”. If there were no correlation between the trait and relatedness, the trait would be extinguished.
(And being willing to kill 10 others isn’t selected against either, so long as the others are strangers or rivals for resources, mates, etc.)
Selection works on relative frequency of genes, not on groups or individuals. To the extent that we have any sort of group feeling or behaviors at all, this is due to commonality of genes. A gene won’t be universal in a population unless it provides its carriers with some sort of advantage over non-carriers. If there’s no individual advantage (or at least gene-specific advantage), it won’t become universal.
Suppose your friend asks you, purely as a hypothetical, whether you would murder someone to save two others. Simply answering that you would be willing to murder reduces your friend’s trust in you.
This sounds less like “consequentialism reduces trust” than “willingness to murder reduces trust” or perhaps “utilitarianism reduces trust”.
Now maybe LessWrong-style consequentialism requires you to lie to your friend, but that hasn’t been studied.
I would expect a LW-style consequentialist to reject such a simple framework as “kill one person to save two” without first requiring an awful lot of Least Convenient Possible World stipulations to rule out alternatives, and/or to prefer to let two people die in the short run rather than establish certain horrible precedents or perverse incentives in the long run, reject the whole thing as a false dichotomy, etc.
Really, I find it hard to imagine a rational consequentialist simply taking the scenario at face value and agreeing to straight-up murder even in a fairly hypothetical discussion.
Doesn’t “sufficiently close relation” also apply with some strength to any being of the same species? Consider a species A that is splitting into two subspecies, A1 and A2. This could be due to members of A1 preferring to save other members of A1. Once A2 dies out, A1 retains the trait of wanting to save other members of A1.
Only after the gene is already essentially universal in the general population. When a gene with altruistic inclinations first appears, it will only increase its propagation by favoring others with the same gene. Otherwise, self-sacrifice will more likely extinguish the gene than spread it.
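A toy model of that claim (a sketch with made-up parameter values, not anything from the paper or thread): an altruism allele spreads only when the benefit of self-sacrifice falls on other carriers often enough, i.e. when something like Hamilton’s rule r*b > c holds.

```python
# Toy model: an altruism allele spreads only when the sacrifice is aimed at
# other carriers often enough (Hamilton's rule r*b > c). All parameter values
# are invented for illustration.

def allele_frequency(generations: int, r: float, b: float = 3.0,
                     c: float = 1.0, p: float = 0.01, w0: float = 5.0) -> float:
    """Replicator-style update on the frequency p of an altruism allele.

    Each carrier pays cost c and confers benefit b on a partner; the partner is
    another carrier with probability r + (1 - r) * p (assortment by kinship),
    otherwise a random member of the population.
    """
    for _ in range(generations):
        w_altruist = w0 - c + b * (r + (1 - r) * p)  # receives help from kin or by chance
        w_other = w0 + b * ((1 - r) * p)             # can only receive help by chance
        mean_w = p * w_altruist + (1 - p) * w_other
        p = p * w_altruist / mean_w
    return p

print("no assortment (r = 0.0):", round(allele_frequency(500, r=0.0), 3))  # extinguished
print("kin-directed (r = 0.5) :", round(allele_frequency(500, r=0.5), 3))  # spreads toward 1
```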
I would be interested in knowing what these Least Convenient Possible World stipulations are, and what the phrase means.
See The Least Convenient Possible World for where the term was introduced.
Precedents and perverse incentives can be ruled out by assuming none exist, right? Assume in the hypothetical that nobody will ever get to know what choice you made after you made it.
But answering the question means that somebody will know: whoever is asking the question and anyone present to hear the answer. And since it’s a hypothetical, the most relevant incentives and consequences are those for the social situation.
I didn’t get how a hypothetical with two clear choices could be a false dichotomy. Assume that refusing to choose results in something far worse than either choice.
Far worse for whom? In what way? If you’re taking the utilitarian position of the greatest good for the greatest number, then the choice is obvious. But consequentialism isn’t utilitarianism: you can choose what’s best for you personally, and what’s best for me depends heavily on the details.
I agree, but in my mind that seems a lot like this: their feelings and values are wired deontologically, their rational brain (incorrectly) thinks they are consequentialists, and they’re finding justifications for their thoughts. Unless of course they find a really good justification. (And even if they did find one, I’d be suspicious of whether the justification came after the feeling or action... or before.)
But that’s you projecting your own experience onto somebody else, aka the Typical Mind Fallacy.
My experience of being asked a utilitarian hypothetical is, “what am I going to get out of answering this stupid hypothetical?” And mostly the answer is, “nothing good”. So I’m going to attack the premise right away. It’s got zero to do with killing or not killing: my answer to the generalized question of “is it ever a good thing to kill somebody to save somebody else” is sure, of course, and that can be true even at 1:1 trade of lives.
Hell, it can be a good thing to kill somebody even if it’s not saving any lives. The more important ethical question in my mind is consent, because it’s a hell of a lot harder to construct a justification to kill somebody without their consent, and my priors suggest that any situation that seems to be generating such a justification is more likely to be an illusion or false dichotomy that needs more time spent on figuring out what’s actually going on.
And even then, that’s not the same as saying that I would personally ever consent to killing someone, whatever the justification. But that’s not because I have a deontological rule saying “never do that”, but because I’m reasonably certain that no real good can ever come of that, without some personal benefit, like saving my own life or that of my spouse. For example, if the two people I’m saving are myself and my wife and the person being killed is somebody attacking us, then I’m much less likely to have an issue with using lethal force.
Based on a glance at the paper you referenced, though, I’m going to say that the authors incorrectly conflated consequentialism and utilitarianism. You can be a consequentialist without being a utilitarian, and even there I’m not 100% sure you can’t have a consistent utilitarian position based on utility as seen by you, as opposed to an impartial interpretation of utility.
At the very least, what the paper is specifically saying is that people don’t like impartial beneficence. That is, we want to be friends with people who will treat their friends better than everybody else. This is natural and also pretty darn obvious… and has zero to do with consequentialism as discussed on LW, where consequentialism refers to an individual agent’s utility function, and it’s perfectly valid for an individual’s utility function to privilege friends and family.
The question isn’t “can’t I”, but “why should I?” The LCPW is a tool for strengthening an argument against something; it’s not something that requires a person to accept or answer arbitrary hypotheticals.
As noted at the end of the article, the recommendation is to distinguish between rejecting the entire argument and accepting the argument contingent on an inconvenient fact. In this particular case, I categorically reject the argument that trolley problems should be answered in a utilitarian way, because I am not a utilitarian.
Some of the studies don’t even involve real situations; they’re purely hypothetical.
Studies that are not about real situations are, by their nature, not good for thinking about the real-world impacts of ideas. It’s hard enough to get studies that use real situations to replicate in a meaningful way in psychology. There’s no intellectual basis for thinking that you can reliably extrapolate from studies about hypothetical situations like that to real-world behavior.
A philosopher is the kind of person who can switch, with remarkable speed, from a very skeptical position like being unsure whether chairs really exist to believing that he can extrapolate hypothetical data to make predictions about complex real-world interactions.
The simplest version is that deontological beings can be aligned over anything. Prisoner’s dilemma? No problem, just use “I will not defect” as a deontological virtue. Both beings will automatically cooperate.
But why that rule, not another? It’s a moral rule because it leads to desirable consequences. So deontology isn’t sharply distinct from consequentialism. But it can still have advantages over altruistic consequentialism because it allows agents to cooperate even if they are out of contact.
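A toy illustration of the coordination point (the payoff numbers are the usual made-up prisoner’s dilemma values): two agents precommitted to “I will not defect” coordinate on mutual cooperation, while two agents who each notice that defection dominates end up at mutual defection.

```python
# Toy payoff table for a one-shot prisoner's dilemma (numbers invented).
# Two rule-followers coordinate on (3, 3); two dominance reasoners land on (1, 1).

PAYOFFS = {  # (row move, column move) -> (row payoff, column payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def rule_follower() -> str:
    """Deontological agent: the rule 'I will not defect' was fixed ahead of time."""
    return "cooperate"

def dominance_reasoner() -> str:
    """Causal-style agent: defection pays more whatever the other player does."""
    return "defect"

for player in (rule_follower, dominance_reasoner):
    moves = (player(), player())  # both players run the same procedure
    print(player.__name__, "vs itself:", moves, "->", PAYOFFS[moves])
```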
A lot of discourse on this site implicitly assumes that being rational increases odds of survival.
Individual or group survival? If you refuse to fight in a war to defend your community, that’s good for your survival, but bad for your community’s survival. Individual and group values are different, which is why morality is different from rationality.
And altruism versus selfishness is the real crux. You tip the scales against consequentialism by treating it as selfish consequentialism. Altruistic consequentialism is very different to selfish consequentialism, but not very different to deontology.
Odds of whose survival?
It seems like your argument is that Causal Decision Theory leads to defection in the prisoner’s dilemma, and that you consider Causal Decision Theory an essential feature of being a consequentialist.
The Sequences advocate Timeless Decision Theory, and Functional Decision Theory was later proposed to solve those problems. If you want to convince people on LessWrong that consequentialism is flawed, you likely need to make arguments that work not just against Causal Decision Theory but also against Timeless Decision Theory and Functional Decision Theory.
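A rough sketch of the distinction (a caricature for illustration, not an implementation of TDT or FDT proper; the payoffs and the “exact copy” framing are assumptions): a CDT-style agent defects against an exact copy of itself, while an agent that evaluates each policy under the assumption that the copy mirrors it chooses cooperation.

```python
# Caricature of the difference (not an implementation of TDT/FDT proper).
# Payoff numbers are the usual toy prisoner's dilemma values.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}  # my payoff

def cdt_choice(p_opponent_cooperates: float) -> str:
    """Treats the opponent's move as causally independent of my choice,
    so defection dominates at every probability."""
    ev = {m: p_opponent_cooperates * PAYOFF[(m, "C")]
             + (1 - p_opponent_cooperates) * PAYOFF[(m, "D")]
          for m in "CD"}
    return max(ev, key=ev.get)

def mirrored_policy_choice() -> str:
    """Against an exact copy, evaluates each policy assuming the copy outputs
    the same move, and picks the better mirrored outcome: (C, C) beats (D, D)."""
    return max("CD", key=lambda m: PAYOFF[(m, m)])

print("CDT against a copy   :", cdt_choice(0.5))           # 'D'
print("mirrored-policy agent:", mirrored_policy_choice())   # 'C'
```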
I feel you are taking some concepts that you think aren’t very well defined, and throwing them away, replacing them with nothing.
I admit that the intuitive notions of “morality” are not fully rigorous, but they are still far from total gibberish. Some smart philosopher may come along and find a good formal definition.
“Survival” is the closest we have to an objective moral or rational determinant.
Whether or not a human survives is an objective question. The amount of hair they have is similarly objective. So is the amount of laughing they have done, or the amount of mathematical facts they know.
All of these have some ambiguity of definition: has a braindead body with a beating heart “survived”? That is a question of how you define “survive”. And once you define that, it’s objective.
There is nothing special about survival, except to the extent that some part of ourselves already cares about it.
And evolution as a force typically acts on collectives, not individuals.
Evolution doesn’t affect any individual in particular. There is no individual moth who evolved to be dark. It acts on the population of moths as a whole. But evolution selects for the individuals that put themselves ahead. Often this means individuals that cheat to benefit themselves at the expense of the species. (Cooperative behaviour is favoured when creatures have long memories and a good reputation is a big survival advantage. Stab your hunting partner in the back to double your share once, and no one will ever hunt with you again.)
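A small sketch of that parenthetical (toy payoffs, invented for illustration): with long memories, one betrayal buys a single larger payoff and then forfeits every future cooperative hunt.

```python
# Toy numbers for the hunting example above (all payoffs invented).
from typing import Optional

ROUNDS = 20
SHARE, DOUBLE_SHARE, ALONE = 3, 6, 1  # per hunt: fair share, stolen share, hunting alone

def lifetime_payoff(betray_on_round: Optional[int]) -> int:
    total, trusted = 0, True
    for r in range(ROUNDS):
        if not trusted:
            total += ALONE              # no one will hunt with you again
        elif r == betray_on_round:
            total += DOUBLE_SHARE       # stab your partner in the back once
            trusted = False
        else:
            total += SHARE
    return total

print("always cooperates:", lifetime_payoff(None))  # 20 * 3 = 60
print("betrays once     :", lifetime_payoff(0))     # 6 + 19 * 1 = 25
```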
I’ve read a statement that goes something like “obviously utilitarianism is the correct moral rule, but deontology does the greatest good for the greatest number”. I may be misremembering the exact statement, but this post reminds me quite a bit of that.
Eliezer said something similar:
Deontology with correct rules is indistinguishable from Consequentialism with aligned goals and perfect information. The same actions will be chosen via each method.
What exactly does “correct” mean here?
Presumably, optimisation of consequences. But the catch is that the rules would need to be infinitely complex to match a consequentialist calculation of unlimited complexity.