People love to have their cake and eat it too. They want to maintain that they have no preferences about how the future of the universe turns out (and therefore can’t be called out on any particular preference), and yet also spend resources affecting the future. As my tone suggests, I think this is wrong, and that arguments for such a position are rationalizations.
Why rationalize? To defend the way you currently make decisions against abstract arguments that you should change how you make decisions. But just because people use rationalizations to oppose those abstract arguments doesn’t mean the abstract arguments are right.
I think the assumption that there is one correct population ethics is wrong, and that it’s totally fine for each person to have different preferences about the future of the universe, just as they have preferences about which ice cream is best, and it’s fine if their preferences don’t follow simple rules (because human preferences are complicated). But this is a somewhat unpalatable bullet for many to bite, and I think it isn’t the argument people intuitively reach for to defend themselves (realism/universalism is the intuitive position).
I don’t understand which views you are attributing to me, which to population ethicists, and which to people in general. Do you think I am claiming not to have preferences about the future? (That’s not right.) Or what exactly? Sorry, I’m confused.
I am accusing the first half of your post of being straight-up bad and agreeing with parts of the second half. To me, it reads like you threw up whatever objections came to hand, as though you first decided to defend your current way of making decisions and only then started listing arguments.
But you did claim to have your own preferences in the second half. My nitpick would be that you pit this against altruism; instead you should be following something like Egan’s law (“It all adds up to normality”). There’s stuff we call altruistic in the real world, and also in the real world people have their own preferences. Egan’s law says you should not take this to mean that the stuff we call altruistic is a lie and the world is actually strange. Instead, the stuff we call altruistic is an expression and natural consequence of people’s own values.
I’m a bit confused about what exactly you mean, and if I attribute to you a view that you do not hold, please correct me.
This kind of argument has always puzzled me. Your ethical principles are axioms; you define them to be correct, and that should compel you to believe that everybody else’s ethics, insofar as they violate those axioms, are wrong. This is where the “objectivity” comes from: it doesn’t matter what other people’s ethics are; my ethical principles are objectively the way they are, and that is all the objectivity I need.
Imagine there were a group of people who used a set of axioms for counting (the natural numbers) that violated the Peano axioms in some straightforward way, such that they came to a different conclusion about what 5 + 3 equals. What do you think the significance of that should be for your mathematical understanding? My guess is “those people are wrong, and I don’t care what they believe. I don’t want to needlessly offend them, but that doesn’t change anything about how I view the world, or how we should construct our technological devices.”
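To make this concrete, here is a minimal Lean sketch (the names `PNat`, `add`, and the numeral definitions are just illustrative, and addition is defined by the usual Peano-style recursion on the second argument); once those definitions are fixed, 5 + 3 = 8 is forced:

```lean
-- Peano-style naturals: zero and successor are the only constructors.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

open PNat

-- Addition by recursion on the second argument:
--   m + 0 = m,   m + S(n) = S(m + n)
def add : PNat → PNat → PNat
  | m, zero   => m
  | m, succ n => succ (add m n)

-- Numerals as iterated successors.
def three : PNat := succ (succ (succ zero))
def five  : PNat := succ (succ three)
def eight : PNat := succ (succ (succ five))

-- add five three unfolds step by step to succ (succ (succ five)),
-- which is definitionally eight, so the proof is just rfl.
example : add five three = eight := rfl
```

A group whose “addition” broke these recursion equations would simply be computing a different function; that is the sense in which I’d say they are wrong about 5 + 3.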
Likewise, if a deontologist says “Human challenge trials for covid are wrong, because [deontological reason]”, my reaction to that (I’m a utilitarian) is pretty much the same.
I understand that there are different kinds of people with vastly different preferences about what we should optimize for (or whether we should try to optimize for anything at all), but why should that stop me from being persuaded by arguments that honor the axioms I believe in, or why should I consider arguments that rely on axioms I reject?
I realize I’ll never be able to change a deontologist’s mind using utilitarian arguments, and that’s fine. When longtermists use utilitarian arguments to argue for longtermism, they assume that the recipient is already a utilitarian, or at least can be persuaded to become one.
I’m less firm than that, but basically yes: replace “one correct” with “one objectively correct.”
Good points. People tend to confuse value pluralism or relativism with open-mindedness.
Basically this. I think that the moral anti-realists are right and there’s no single correct morality, including population ethics. (Corollary: there are no wrong morals, except from perspective or for signalling purposes.)
Surely Future-Tuesday-suffering-indifference is wrong?
Do you consider perspective something experiential or is it conceptual? If the former, is there a shared perspective of sentient life in some respects? E.g. “suffering feels bad”.
I consider it experiential, but I’m talking about wrongness in a “these are the true or objective moral values, and all others are false” fashion.