As a friend of ants, what’s good for ants is good for me, and what’s good for me is good for ants.
So you’re happy to donate some of your moral weight to ants.
But I don’t see how a vast increase in Earth’s ant population would be helpful to ants
Basically, if our AIs freed up resources by eliminating humans, then most current ant nests could found several daughter nests.
In that section I’m assuming the AI is using something resembling Utilitarian ethics, attempting to maximize the total utility, where ‘utility’ is something that can be summed up across individuals. So 20 quadrillion ants living good lives is approximately twice as good as only 10 quadrillion ants living equally good lives, and 40 quadrillion ants living good lives is twice as good again. As I discuss at a later point in the post, it’s possible to construct ethical systems that don’t have this property of utility scaling (all things being equal) with population level, but something along the lines of Utilitarianism with linear summation of utility is usually the default assumption on Less Wrong (and indeed among many contemporary ethical philosophers).
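As a minimal sketch of the linear-summation assumption described above (the function name and the per-ant utility value here are my own, purely illustrative): total utility is just the sum of individual utilities, so doubling the population at the same per-capita welfare doubles the total.

```python
# Illustrative sketch only: linear (total) utilitarian aggregation.
# The per-ant utility value is an arbitrary assumption, not from the post.

def total_utility(population: float, utility_per_individual: float) -> float:
    """Total utility under linear summation: N individuals times utility u each."""
    return population * utility_per_individual

u_good_ant_life = 1.0                         # arbitrary per-ant utility
print(total_utility(10e15, u_good_ant_life))  # 10 quadrillion ants
print(total_utility(20e15, u_good_ant_life))  # twice as many ants -> twice the total
print(total_utility(40e15, u_good_ant_life))  # twice as good again
```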
Thanks. I think the default assumption you expanded on doesn’t match my view. Global ethical worth isn’t necessarily a finite quantity subject only to zero-sum games.
So you’re happy to donate some of your moral weight to ants.
I’m happy for any and all living beings to be in good health. I don’t lose any moral weight as a result. Quite the opposite: when I wish others well, I create for myself a bit of beneficial moral effect; it’s not donated to ants at my expense.
Ethical worth may not be finite, but resources are finite. If we value ants more, then that means we should give more resources to ants, which means that there are fewer resources to give to humans.
From your comments on how you value reducing ant suffering, I think your framework regarding ants seems to be “don’t harm them, but you don’t need to help them either”. So basically reducing suffering but not maximising happiness.
Utilitarianism says that you should also value the happiness of all beings with subjective experience, and that we should try to make them happier, which leads to the question of how to do this if we value animals. I’m a bit confused: how can you value not intentionally making them suffer, but not also conclude that we should give resources to them to make them happier?
how can you value not intentionally making them suffer, but not also conclude that we should give resources to them to make them happier?
Great points and question, much appreciated.
I devote a bit of my limited time to helping ants and other beings as the opportunities arise. Giving limited resources in this way is a win-win; I share the rewards with the ants. In other words, they’re not benefiting at my expense; I am happy for their well-being, and in this way I also benefit from an effort such as placing an ant outdoors. A lack of infinite resources hasn’t been a problem; it just helps my equanimity and patience to mature.
Generally, though, all life on Earth evolved within a common context and it’s mutually beneficial for us all that this environment be unpolluted. The things that I do that benefit the ants also tend to benefit the local plants, bacteria, fungi, reptiles, mammals, etc. -- me included. The ants are content to eat a leaf of a plant I couldn’t digest. I can’t make them happier by feeding them my food or singing to them all day, as far as I can tell. If they’re not suffering, that’s as happy as they can be.
I think the same is true for humans: happiness and living without suffering are the same thing.
Unfortunately, it seems that we all suffer to some degree or another by the time we are born. So while I am in favor of reducing suffering among living beings, I am not in favor of designing new living beings. The best help we can give to hypothetical “future” beings is to care for the actually-living ones and those being born.
I think your views contradict utilitarianism. The moral worth resides in each individual, since they have a subjective experience of the world, while a collective like “ants” does not. So doubling the ant population is twice as good.
You’re free to disagree with utilitarianism, but there’s a lot of work showing how it aligns pretty closely with most people’s moral intuitions. That’s why most folks around here find utilitarianism more appealing than the type of ethics you seem to be espousing.
I think ethics is just a matter of preference, but I’d apply something like utilitarianism in most cases because it’s what I’d want applied if we picked a set of ethics to apply universally.
You might want to read up on utilitarianism if you haven’t, because you’ll find it the starting point for many discussions of ethics on LessWrong.
there’s a lot of work showing how [utilitarianism] aligns pretty closely with most people’s moral intuitions
There is? Could you link to some examples?
It’s my understanding that utilitarianism does not align with most people’s moral intuitions, in fact. I would be at least moderately surprised to learn that the opposite is true.
You might want to read up on utilitarianism if you haven’t, because you’ll find it the starting point for many discussions of ethics on LessWrong.
Utilitarianism, however, has many, many problems. How familiar are you with critiques of it?
I’m not arguing that utilitarianism is correct in any absolute sense, or that it aligns perfectly with moral intuitions. I was just trying to explain why so many people around here are so into it. I’m familiar with many critiques of utilitarianism. I’m not aware of any ethical system that aligns better with moral intuitions. No system is going to align perfectly with our moral intuitions because they’re not systematic.
Any system with a slot for intention does.
Have you read Eliezer’s Ends Don’t Justify Means (Among Humans)?
What am I supposed to be getting out of that? Inasmuch as it is a half-hearted defence of deontology, it isn’t a wholehearted defence of pure utilitarianism.
Eliezer is usually viewed as a Utilitarian, which would make him a consequentialist. His point in that article seems to be an acknowledgement that, because human thinking is so prone to self-justification, deontology has its merits -- which I thought related to your point about caring about intentions as well as effects.
It’s not a given that utilitarianism involves caring about intentions.
Rather the opposite. Utilitarianism cares about outcomes, so to first order it doesn’t factor intentions in at all. Of course, if someone intends to harm me but somehow fails and instead unintentionally does me good, then even though I haven’t been harmed yet, I have a reasonable concern that they might try again, perhaps more successfully next time. So intentions matter under Utilitarianism to the extent that they can be used to predict the probabilities of outcomes -- plus, of course, to the extent that they hurt feelings or cause concern, which are actual emotional harms.
Whose moral intuitions? Clearly not everyone’s. But most people’s? Is that your claim? Or only yours? Or most people’s on Less Wrong? Or…?
My primary niggling concern with standard utilitarianism is “The Repugnant Conclusion”: the way it always wants to maximize population at the carrying capacity of the available resources, at the point where well-being per individual is already going down significantly -- by enough for the slope of the well-being-against-resources curve to counterbalance the increase in population. Everyone ends up on the edge of starvation, which is, of course, what natural populations do. Admittedly, once you allow for things like resource depletion, or just the possibility of famines or poor weather, that probably pushes the optimum down a bit, to the point where you’re normally keeping some resource capacity in reserve. But I just can’t shake the feeling that if we all agreed to decrease the population by only ~20%, we’d all individually be ~10-15% happier. However, I can’t see a good way to make the math balance, short of making utility mildly nonlinear in population, which seems really counterintuitive, and like it might give the wrong answers for gambles about the loss of a lot of lives. [I’m thinking this through and might do a post if I come up with anything interesting.]
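To make the dynamic concrete, here is a toy numerical sketch of the tension described above. The logarithmic well-being function, the fixed resource pool, and all the constants are assumptions of mine, not anything from the comment: it just shows that a linear-sum objective N·u(R/N) peaks at a larger population and lower per-capita well-being, so cutting the population raises individual well-being while the summed total still goes down.

```python
# Toy model of the tension described above; all functions and constants are
# illustrative assumptions, not taken from the comment.
import math

RESOURCES = 1000.0  # fixed pool of resources to divide among the population

def wellbeing(resources_per_person: float) -> float:
    """Assumed diminishing-returns well-being: logarithmic, zero at 1 unit/person."""
    return math.log(resources_per_person)

def total_utility(population: int) -> float:
    """Linear-sum (total) utilitarian objective: N * u(R / N)."""
    return population * wellbeing(RESOURCES / population)

# The total-utility optimum sits at a large population with modest per-capita well-being.
best_n = max(range(1, int(RESOURCES)), key=total_utility)
smaller_n = int(best_n * 0.8)  # cut the population by ~20%

print(best_n, round(wellbeing(RESOURCES / best_n), 2))        # e.g. 368, per-capita ~1.0
print(smaller_n, round(wellbeing(RESOURCES / smaller_n), 2))  # per-capita rises to ~1.22
print(total_utility(best_n) > total_utility(smaller_n))       # but the summed total falls: True
```

In this toy setup the ~20% smaller population is individually better off by roughly the margin guessed at above, yet simple linear summation still scores it as worse -- which is exactly the difficulty with making the math balance.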
I do think it’s worth giving some moral worth to a species too, so we make increasing efforts to prevent extinction if a species’ population drops, but that’s basically just a convenient shorthand for the utility of future members of that species who cannot exist if it goes extinct.
we make increasing efforts to prevent extinction if a species’ population drops, but that’s basically just a convenient shorthand for the utility of future members of that species who cannot exist if it goes extinct.
In addition to the utility of hypothetical future beings, there’s also the utility of the presently living members of that species who are alive thanks to the extinction-prevention efforts in this scenario.
The species is not extinct because these individuals are living. If you can help the last members of a species maintain good health for a long time, that’s good even if they can’t reproduce.
The moral worth resides in each individual, since they have a subjective experience of the world, while a collective like “ants” does not. So doubling the ant population is twice as good.
Wouldn’t the hive need to have a subjective experience—collectively or as individuals—for it to be good to double their population in your example?
Whether they’re presently conscious or not, I wouldn’t want to bring ant-suffering into the world if I could avoid it. On the other hand, I do not interfere with them and it’s good to see them doing well in some places.
As for your five mentions of “utilitarianism”: I try to convey my view in the plainest terms. I do not mean to offend you or any -isms or -ologies of philosophy. I like reason and am here to learn what I can. Utilitarians are all friends to me.
I think ethics is just a matter of preference
I’m fine with that framing too. There are a lot of good preferences found commonly among sentient beings. Happiness is better than suffering precisely to the extent of preferences, i.e. ethics.
The reason why it’s considered good to double the ant population is not necessarily because it’ll be good for the existing ants, it’s because it’ll be good for the new ants created. Likewise, the reason why it’ll be good to create copies of yourself is not because you will be happy, but because your copies will be happy, which is also a good thing.
Yes, for making more ants to count as good under utilitarianism, the ants do need to have subjective experience, because utilitarianism only values subjective experiences. Though, if your model of the world says that ant suffering is bad, then doesn’t that imply that you believe ants have subjective experience?
if your model of the world says that ant suffering is bad, then doesn’t that imply that you believe ants have subjective experience?
Indeed. I was questioning the proposition by Seth Herd that a collective like ants does not have subjective experience and so “doubling the ant population is twice as good.” I didn’t follow that line of reasoning and wondered whether it might be a mistake.
Likewise, the reason why it’ll be good to create copies of yourself is not because you will be happy, but because your copies will be happy
I don’t think creating a copy of myself is possible without repeating at least the amount of suffering I have experienced. My copies would be happy, but so too would they suffer. I would opt out of the creation of unnecessary suffering. (Aside: I am canceling my cryopreservation plans after more than 15 years of Alcor membership.)
Likewise, injury, aging and death are perhaps not the only causes of suffering in ants. Birth could be suffering for them too.
We do agree that suffering is bad, and that if a new clone of you would experience more suffering than happiness, then it’ll be bad, but does the suffering really outweigh the happiness they’ll gain?
You have experienced suffering in your life. But still, do you prefer to have lived, or do you prefer to not have been born? Your copy will probably give the same answer.
(If your answer is genuinely “I wish I wasn’t born”, then I can understand not wanting to have copies of yourself)
One life like mine, that has experienced limited suffering and boundless happiness, is enough. Spinning up too many of these results in boundless suffering. I would not put this life on repeat, unlearning and relearning every lesson for eternity.