I think your views contradict utilitarianism. The moral worth resides in each individual, since they have a subjective experience of the world, while a collective like “ants” does not. So doubling the ant population is twice as good.
You’re free to disagree with utilitarianism, but there’s a lot of work showing how it aligns pretty closely with most people’s moral intuitions. That’s why most folks around here find utilitarianism more appealing than the type of ethics you seem to be espousing.
I think ethics is just a matter of preference, but I’d apply something like utilitarianism in most cases because it’s what I’d want applied if we picked a set of ethics to apply universally.
You might want to read up on utilitarianism if you haven’t, because you’ll find it the starting point for many discussions of ethics on LessWrong.
there’s a lot of work showing how [utilitarianism] aligns pretty closely with most people’s moral intuitions
There is? Could you link to some examples?
It’s my understanding that utilitarianism does not align with most people’s moral intuitions, in fact. I would be at least moderately surprised to learn that the opposite is true.
You might want to read up on utilitarianism if you haven’t, because you’ll find it the starting point for many discussions of ethics on LessWrong.
Utilitarianism, however, has many, many problems. How familiar are you with critiques of it?
I’m not arguing that utilitarianism is correct in any absolute sense, or that it aligns perfectly with moral intuitions. I was just trying to explain why so many people around here are so into it. I’m familiar with many critiques of utilitarianism. I’m not aware of any ethical system that aligns better with moral intuitions. No system is going to align perfectly with our moral intuitions because they’re not systematic.
Any system with a slot for intention does.
Have you read Eliezer’s Ends Don’t Justify Means (Among Humans)?
What am I supposed to be getting out of that? Inasmuch as it is a half-hearted defence of deontology, it isn’t a wholehearted defence of pure utilitarianism.
Eliezer is usually viewed as a utilitarian, which would make him a consequentialist. His point in that article seems to be an acknowledgement that, because human thinking is so prone to self-justification, deontology has its merits, which I thought related to your point about caring about intentions as well as effects.
It’s not a given that utilitarianism involves caring about intentions.
Rather the opposite. Utilitarianism cares about outcomes, so to first order it doesn’t factor intentions in at all. Of course, if someone intends to harm me, somehow fails, and instead unintentionally does me good, then even though I haven’t been harmed yet I do have a reasonable concern that they might try again, perhaps more successfully next time. So intentions matter under utilitarianism to the extent that they can be used to predict the probabilities of outcomes. Plus, of course, to the extent that they hurt feelings or cause concern, and those are actual emotional harms.
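One way to make this precise is a sketch in standard expected-utility notation (my gloss, not anything from the comment itself):

$$\mathbb{E}[U \mid \text{intent}] = \sum_{o} P(o \mid \text{intent}) \, U(o)$$

The utility function $U(o)$ is defined over outcomes only; someone’s intent shifts the expectation solely through the probabilities $P(o \mid \text{intent})$, e.g. a failed attempt to harm you raises the probability of a harmful outcome later, and any distress the attempt causes is itself an outcome with negative utility.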
Whose moral intuitions? Clearly not everyone’s. But most people’s? Is that your claim? Or only yours? Or most people’s on Less Wrong? Or…?
My primary niggling concern with standard utilitarianism is “The Repugnant Conclusion”: it always wants to push the population up toward the carrying capacity of the available resources, to the point where well-being per individual is already dropping significantly, by enough that the downward slope of the well-being-versus-resources curve counterbalances the gain from adding more people. Everyone ends up on the edge of starvation, which is, of course, what natural populations do. Admittedly, once you allow for things like resource depletion, or just the possibility of famines or poor weather, the optimum probably shifts down a bit, to the point where you’re normally keeping some reserve of resource capacity.
But I just can’t shake the feeling that if we all just agreed to decrease the population by only ~20%, we’d all individually be ~10-15% happier. However, I can’t see a good way to make the math balance, short of making utility mildly nonlinear in population, which seems really counterintuitive, and like it might give the wrong answers for gambles about the loss of a lot of lives. [I’m thinking this through and might do a post if I come up with anything interesting.]
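A quick check of that arithmetic, using only the commenter’s own figures and assuming happiness adds linearly across people (the standard total-utilitarian assumption); the 12.5% below is just the midpoint of the quoted 10-15% range:

$$U = N\,\bar{u} \;\longrightarrow\; U' = (0.8\,N)\,(1.125\,\bar{u}) = 0.9\,N\,\bar{u} < U$$

So under a straight total-utility sum, a ~20% smaller population with everyone ~10-15% happier still comes out roughly 8-12% worse in total; the per-person gain would have to exceed 25% (since $0.8 \times 1.25 = 1$) before the smaller population wins, which is why the math only balances if utility is made nonlinear in population.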
I do think it’s worth giving some moral worth to a species too, so we make increasing efforts to prevent extinction if a species’ population drops, but that’s basically just a convenient shorthand for the utility of future members of that species who cannot exist if it goes extinct.
we make increasing efforts to prevent extinction if a species’ population drops, but that’s basically just a convenient shorthand for the utility of future members of that species who cannot exist if it goes extinct.
In addition to the concept of utility of hypothetical future beings, there’s also the utility of the presently living members of that species who are alive thanks to the extinction-prevention efforts in this scenario.
As long as these individuals are living, the species is not extinct. If you can help the last members of a species maintain good health for a long time, that’s good even if they can’t reproduce.
The moral worth resides in each individual, since they have a subjective experience of the world, while a collective like “ants” does not. So doubling the ant population is twice as good.
Wouldn’t the hive need to have a subjective experience—collectively or as individuals—for it to be good to double their population in your example?
Whether they’re presently conscious or not, I wouldn’t want to bring ant-suffering into the world if I could avoid it. On the other hand, I do not interfere with them and it’s good to see them doing well in some places.
As for your five mentions of “utilitarianism”: I try to convey my view in the plainest terms. I do not mean to offend you or any -isms or -ologies of philosophy. I like reason and am here to learn what I can. Utilitarians are all friends to me.
I think ethics is just a matter of preference
I’m fine with that framing too. There are a lot of good preferences found commonly among sentient beings. Happiness is better than suffering precisely to the extent of preferences, i.e. ethics.
The reason it’s considered good to double the ant population is not necessarily that it’ll be good for the existing ants; it’s that it’ll be good for the new ants created. Likewise, the reason why it’ll be good to create copies of yourself is not because you will be happy, but because your copies will be happy, which is also a good thing.
Yes, it does require the ants to have subjective experience for making more of them to be good under utilitarianism, because utilitarianism only values subjective experiences. Though, if your model of the world says that ant suffering is bad, then doesn’t that imply that you believe ants have subjective experience?
if your model of the world says that ant suffering is bad, then doesn’t that imply that you believe ants have subjective experience?
Indeed. I was questioning the proposition by Seth Herd that a collective like ants does not have subjective experience and so “doubling the ant population is twice as good.” I didn’t follow that line of reasoning and wondered whether it might be a mistake.
Likewise, the reason why it’ll be good to create copies of yourself is not because you will be happy, but because your copies will be happy
I don’t think creating a copy of myself is possible without repeating at least the amount of suffering I have experienced. My copies would be happy, but so too would they suffer. I would opt out of the creation of unnecessary suffering. (Aside: I am canceling my cryopreservation plans after more than 15 years of Alcor membership.)
Likewise, injury, aging and death are perhaps not the only causes of suffering in ants. Birth could be suffering for them too.
We do agree that suffering is bad, and that if a new clone of you would experience more suffering than happiness, then creating them would be bad. But does the suffering really outweigh the happiness they’d gain?
You have experienced suffering in your life. But still, do you prefer to have lived, or do you prefer to not have been born? Your copy will probably give the same answer.
(If your answer is genuinely “I wish I wasn’t born”, then I can understand not wanting to have copies of yourself)
One life like mine, that has experienced limited suffering and boundless happiness, is enough. Spinning up too many of these results in boundless suffering. I would not put this life on repeat, unlearning and relearning every lesson for eternity.