Thanks for the helpful feedback!
1) Yes, how we measure utility is always an issue. Most papers I’ve read don’t address it, working from the arguably fair assumption that there is somehow greater and lesser utility, that anything in real life is just an approximation, and that you can still aim for the ideal. Ideally we would just ask trustworthy people how happy or unhappy they are, or something similar. In practice, and for prescribing behavior, I think we use the popular components approach, assuming most people like food and hate being tortured.
2) I’m slightly confused by this. Are you talking about bringing a large group of people into existence with varying utilities? For simplicity I was discussing idealized theoretical cases, such as a single child or, yes, a new population all of roughly the same utility.
4) Yes, that’s essentially my point, though I haven’t (I think) yet suggested how realizing these “decision-changed metrics” alters our decisions about potential people. But perhaps you meant simply that the wellbeing of someone who will exist should affect what we do.
4a) I would say that we should treat all people’s utility equally once they come into being, which I think agrees with what you said. The last line about anti-Amish bias seems to run counter to that idea, however.
5) Before a normative rule came to me, I was going to end this post with “lacking any prescriptive power, however, we might default to total or average utilitarianism.” Regardless, I’ve tried to keep this post merely descriptive. Though the rule I came up with resembles average utilitarianism in some ways, average utilitarianism also has consequences I’m not happy with. For example, if there were 20 people with extremely high utility and 1000 with utility half that (but still very good lives), then as long as those 20 didn’t mind the slaughter of the thousand, the average approach seems to advocate killing the 1000 to raise the average.
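The arithmetic behind that objection can be checked directly. Here is a minimal sketch, with concrete utility values (100 and 50) assumed purely for illustration:

```python
# Hypothetical utilities for the example above (assumed numbers):
elite = [100] * 20     # 20 people with extremely high utility
masses = [50] * 1000   # 1000 people at half that, but still very good lives

def avg(population):
    """Average utility across a population."""
    return sum(population) / len(population)

before = avg(elite + masses)  # average with everyone alive: ~50.98
after = avg(elite)            # average if the 1000 are removed: 100.0
```

Since `after > before`, removing the 1000 (all with lives well worth living) nearly doubles the average, which is exactly the counterintuitive recommendation at issue.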
1) I wish we could do better.
2) I’m just agreeing that the “perhaps interpretable” is “definitely not the same as, except under certain assumptions”, which you were well aware of.
4a) I had one too many negatives (bad edit). I was indeed making an anti-Amish suggestion: to the extent that some group of people is committed to a massive future population, those who personally intend to bring about a lower population level shouldn’t necessarily be constrained in their decision-making in favor of the profligate reproducers’ spawn.
5) Please do continue with another post, then.
It seems odd to me to value the utility of the new Amish masses less than others’, since no one gets to choose why they were brought into existence, or whether they were at all. If we maintain a belief in an essential equality of moral worth between people, I think we would be constrained by the reproducers’ offspring. Of course, I may not like that, but that’s an issue to take up with the current Amish-spawners.
That’s a reasonable suggestion. I certainly haven’t complained about the teeming Amish masses before, so if I really care, I ought to first try to exert some influence now.