Yes, I think it makes sense to judge our actions by their expected results, even though expected results like “shutting off life support for this coma patient leads to .1 fewer lives than keeping it on” are sometimes difficult to think clearly about.
If futures with more human beings in them are better by your moral lights than futures with fewer human beings in them, then it similarly makes sense to judge your actions by the expected number of human beings they cause to exist in the world, all else being equal.
More specifically, given the above, it makes sense for the chance that a particular organism will become a human being if allowed to survive to contribute to your moral valuation of allowing that organism to survive, all else being equal.
If the “more human beings ⇒ better” term in your moral equation is sufficiently strongly weighted so as to overpower the contribution of other terms, then you can delete the phrase “all else being equal” above. I don’t actually know anyone for whom this is true.
Even given that moral weighting, it doesn’t follow that “killing human beings is morally bad.” For example, if the expected result of killing George is that .1 more lives are saved than if we let George live, there’s at least a coherent moral argument for killing George. If 1000 more lives are saved, it’s even an emotionally compelling argument.
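The expected-results comparison above can be sketched as a toy calculation. Everything here (the actions, the probabilities, and the "lives saved" values) is invented purely for illustration of the decision rule, not a claim about any real situation:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one action's results."""
    return sum(p * v for p, v in outcomes)

# Hypothetical numbers: value = net lives saved relative to doing nothing.
actions = {
    "let George live": [(1.0, 0.0)],               # baseline: no net change
    "kill George":     [(0.5, -1.2), (0.5, 1.4)],  # expectation: +0.1 lives
}

# Judge each action by the expected value of its results and pick the best.
best = max(actions, key=lambda name: expected_value(actions[name]))
```

Under these made-up numbers the rule favors the action with the higher expectation, which is the "coherent moral argument" in the paragraph above; whether a +0.1 expected difference is also emotionally compelling is a separate question.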
Again, it makes sense to me to judge actions in terms of the moral value of their expected results. In fact, that’s pretty much the only way it makes sense to me to judge actions. That means I consider questions like “is choosing to become pregnant good/bad?” or “is choosing to terminate a pregnancy good/bad?” or “is killing George good/bad?” somewhat ill-formed… “good” and “bad” properly apply to the expected results of an action (and even then, only relative to the results of other actions), not to the action itself. When talking about a specific action the difference doesn’t matter too much; we take the action as standing for its expected results by metonymy. When talking about a class of actions it starts to matter more.
It’s probably obvious by now that I’m a moral consequentialist. A deontologist (I know there’s at least one on LW) would disagree with most of my reasoning above.
Do we accept the view that “less human beings ⇒ worse”? If not, then why not kill people on sight? The only alternatives are “less human beings ⇒ better” and “less human beings ⇒ indifferent”. Obviously one can claim the latter but it seems counterintuitive to me. Also, the sentence “less human beings ⇒ worse” seems logically connected with “more human beings ⇒ better”.
I’m not quite sure I understand your post correctly. Do you want me not to judge each single action in isolation, but to consider all the alternatives and choose the best one?
I fail to see any further consequences of an action of “terminating a pregnancy” than “the pregnancy is terminated”.
Do we accept the view that “less human beings ⇒ worse”?
I can’t speak for anyone else, but I don’t accept this view. For example, I can easily imagine a future with N humans that I would happily choose over a different future with 2N humans, or 2^N humans.
In other words, there are many aspects of the future that I consider more important than how many human beings it holds.
If not, then why not kill people on sight?
You seem to be suggesting that the only reason not to kill people is that I want to maximize the number of humans in the world, so if I don’t want that, it follows that I must have no reason not to kill people. That seems pretty bizarre to me.
Anyway, I suspect the primary reason I don’t kill people is squeamishness. Other reasons include fear of punishment and the belief that I can’t reliably predict the consequences of them dying, and in some cases the belief that most likely consequences of them dying are bad ones.
I’d think better of myself if that last one were the primary reason, but I’m fairly certain it isn’t.
The only alternatives are “less human beings ⇒ better” and “less human beings ⇒ indifferent”.
And also “less human beings ⇒ better or worse, it depends on other things.”
Do you want me not to judge each single action in isolation, but to consider all the alternatives and choose the best one?
I don’t think anyone can consider all the alternatives, but yes, I’d certainly recommend choosing the best of the alternatives you’re able to consider. I wouldn’t have thought this controversial?
I fail to see any further consequences of an action of “terminating a pregnancy” than “the pregnancy is terminated”.
I suspect I don’t understand what you mean to express, here. Can you contrast this with an action for which you are able to see downstream consequences?
I might seem short-sighted, but I see a huge difference between the generic “human lives” and “human dies”. Of course I might reconsider when faced with the consequences of extending the life of this particular human being, but generally, as a first approximation, I’m choosing his life over his death. This is probably the point where we disagree. You refuse to give any answer to this question without further knowledge, while I have a predefined answer which can be modified only in extreme cases.
Consider keeping a violent dictator of some small country in Africa alive. Its consequences are not only “one man stays alive” but most certainly also “many thousands of other men die”. This might make me choose his death over his life.
The worst part is that I can’t really say what happens after he dies (perhaps some of his associates simply take his place).
Re: the violent dictator… as I said initially: “If 1000 more lives are saved, it’s even an emotionally compelling argument.”
Killing a Bad Person to save a thousand Innocent People is a relatively easy emotional equation, and that’s as true of me as it is for you.
As for the point where we disagree… I’m not certain we do disagree, actually.
If you’re asking me about a particular human whose fate is singularly brought to my attention, whether it lives or dies, I will almost certainly let it live as long as that doesn’t cost very much to me or anyone I care about, or even if it does, if the human is someone I happen to know and like.
I don’t think we disagree on this point.
But if you ask me whether “less humans” is better or worse in general, which is what I thought you were asking about, I understand that to be a different question.
I am, right this moment, not raising a child. I’m not even siring one to be raised by others. In fact, I haven’t done either of those things in my life (as far as I know) and am very unlikely to in the future. I know that this results in fewer humans compared to a lifestyle of siring as many children as possible.
If “less humans ⇒ worse”, it follows that I’m choosing to make the world worse.
As I’ve said, I don’t believe that, so that doesn’t bother me. You seem to be claiming that you do believe that (as you say, without the need for any additional knowledge about the situation), so it seems to follow that you believe I’m making the world worse and that I should be siring as many children as possible.
Do you in fact believe that?
My guess is that you don’t, and that we don’t actually disagree as much as you seem to think we do.
I think the appearance of disagreement is in part because you’re switching the question around (from “is fewer humans worse?” to “would I let a human die, given a salient choice?”) in mid-conversation, and comparing my answer to the first question to your answer to the second question.
That might be a deliberate “bait and switch”, but my intuition is that you’re doing that because the question switches around in your own head as you think about it. Of course I don’t know for sure, but that’s a pretty common thing people do when thinking about emotionally difficult questions.
I like your reasoning. I think it clarified my outlook on the issue a lot. Thanks for taking the time to explain your view, over and over, to a less rigorous thinker.
You are entirely welcome.