It’s not obvious that you’ve gained anything here. We can reduce to total utilitarianism—just assume that everyone’s utility is zero at the decision point. You still have the repugnant conclusion issue where you’re trying to decide whether to create more people or not based on summing utilities across populations.
I think there’s a definite difference. As soon as you treat utility as part of decision-making, rather than just an abstract thing-to-maximize, you are allowed to break the symmetry between existing people and nonexisting people.
If I want to take the action with the highest total delta-U, and some actions create new people, the most straightforward way to do it is to take the action with the highest delta-U according to currently-existing people. This is my preferred solution.
The second most straightforward way is to take the action with the highest delta-U according to the people who exist after you take the action. This is bad because it leads straight to killing off all humans and replacing them with easily satisfied homunculi. Or the not-as-repugnant repugnant conclusion, if all you’re allowed to do is create additional people.
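Here’s a toy sketch of the contrast between those two rules. All of the names and numbers below are made up purely for illustration; they aren’t from anything above.

```python
# Two candidate decision rules: score actions by the delta-U of the people who
# exist *now*, vs. by the utility of the people who exist *after* the action.
# (Illustrative numbers only.)

actions = {
    "status_quo": {
        "delta_current": {"alice": 0.0, "bob": 0.0},
        "post_population": {"alice": 0.0, "bob": 0.0},
    },
    "replace_with_homunculi": {
        # Existing people strongly disprefer being killed off...
        "delta_current": {"alice": -100.0, "bob": -100.0},
        # ...but the easily satisfied homunculi who exist afterwards are thrilled.
        "post_population": {"homunculus_1": 50.0, "homunculus_2": 50.0},
    },
}

def rule_current(actions):
    """Rule 1: maximize total delta-U of currently existing people."""
    return max(actions, key=lambda a: sum(actions[a]["delta_current"].values()))

def rule_post(actions):
    """Rule 2: maximize total utility of whoever exists after the action."""
    return max(actions, key=lambda a: sum(actions[a]["post_population"].values()))

print(rule_current(actions))  # -> "status_quo"
print(rule_post(actions))     # -> "replace_with_homunculi"
```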
Wouldn’t the highest delta-U be to modify yourself so that you maximize the utility of people as they are right now, and ignore future people even after they’re born?
Nope.
Why not?
Let me try making this more explicit.
Alice has utility function A. Bob will have utility function B, but he hasn’t been born yet.
You can make choices u or v, then once Bob is born, you get another choice between x and y.
A(u) = 1, A(v) = 0, A(x) = 1, A(y) = 0
B(u) = 0, B(v) = 2, B(x) = 0, B(y) = 2
If you can’t precommit, you’ll do u the first time, for 1 util under A, and y the second, for 2 util under A+B (compared to 1 util for x).
If you can precommit, then you know that, if you don’t, you’ll pick uy. Precommitting to ux instead gives you +1 util under A (2 instead of 1), and since you’re still operating under A when you make the precommitment, that’s what you’ll do.
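A toy calculation of that example (the variable and function names are mine; the numbers are the ones above):

```python
A = {"u": 1, "v": 0, "x": 1, "y": 0}   # Alice's utility function
B = {"u": 0, "v": 2, "x": 0, "y": 2}   # Bob's utility function (he isn't born yet)

def AB(choice):
    return A[choice] + B[choice]

# Without precommitment: maximize A at step one, A+B at step two.
step1 = max(["u", "v"], key=lambda c: A[c])   # only Alice exists: pick "u"
step2 = max(["x", "y"], key=AB)               # Bob now exists: pick "y"
no_precommit = (step1, step2)

# With precommitment you are still operating under A alone, so whole plans
# are compared by Alice's utility only.
plans = [("u", "x"), ("u", "y"), ("v", "x"), ("v", "y")]
precommit = max(plans, key=lambda p: A[p[0]] + A[p[1]])

print(no_precommit)  # ('u', 'y') -- worth 1 util under A
print(precommit)     # ('u', 'x') -- worth 2 util under A
```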
While I’m at it, you can also get into a prisoner’s dilemma with your future self, as follows:
A(u) = 1, A(v) = 0, A(x) = 2, A(y) = 0
B(u) = −1, B(v) = 2, B(x) = −2, B(y) = 1
Note that this gives:
A+B(u) = 0, A+B(v) = 2, A+B(x) = 0, A+B(y) = 1
Now, under A, you’d want u for 1 util, and once Bob is born, under A+B you’d want y for 1 util.
But if you instead took vx, that would be worth 2 util for A and 2 util for A+B. So vx is better than uy both from Alice’s perspective and from Alice+Bob’s perspective. Certainly that would be a better option.
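The same toy calculation with this second set of numbers (again, the code names are mine), showing the prisoner’s-dilemma structure:

```python
A = {"u": 1, "v": 0, "x": 2, "y": 0}
B = {"u": -1, "v": 2, "x": -2, "y": 1}

def AB(c):
    return A[c] + B[c]

# Greedy, step-by-step choice: maximize A before Bob exists, A+B afterwards.
greedy = (max(["u", "v"], key=lambda c: A[c]), max(["x", "y"], key=AB))

def total(plan, f):
    return sum(f[c] for c in plan)

for plan in [greedy, ("v", "x")]:
    print(plan, "Alice:", total(plan, A),
          "Alice+Bob:", total(plan, A) + total(plan, B))
# ('u', 'y') Alice: 1 Alice+Bob: 1
# ('v', 'x') Alice: 2 Alice+Bob: 2
```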
Suppose we build a robot that takes a census of currently existing people, and a list of possible actions, and then takes the action that causes the biggest increase in utility of currently existing people.
You come to this robot before your example starts, and ask “Do you want to precommit to action vx, since that results in higher total utility?”
And the robot replies, “Does taking this action of precommitment cause the biggest increase in utility of currently existing people?”
“No, but you see, in one time step there’s this Bob guy who’ll pop into being, and if you add in his utilities from the beginning, by the end you’ll wish you’d precommitted.”
“Will wishing that I’d precommitted be the action that causes the biggest increase in utility of currently existing people?”
You shake your head. “No...”
“Then I can’t really see why I’d do such a thing.”
I’d say yes. It gives an additional 1 util to currently existing people (precommitting to vx gets Alice 2 under A, instead of the 1 she’d get from uy), since it ensures that the robot will later make a choice that currently existing people like.
Are you only counting how much they value the world as it currently is? For example, if someone wants to be buried when they die, the robot wouldn’t arrange it, because by the time it happens they won’t be in a state to appreciate it?
Ooooh. Okay, I see what you mean now—for some reason I’d interpreted you as saying almost the opposite.
Yup, I was wrong.
My intended solution was that, when you check your constituents’ utility gain from creating more people, you’re explicitly not taking the utility of the new people into account. I’ll add a few sentences at the end of the article to try to clarify this.
Another thing I can say is that, if you assume that everyone’s utility is zero at the decision point, it’s not clear why you would see a utility gain from adding more people.
Isn’t this equivalent to total utilitarianism that only takes into account the utility of already extant people? Also, isn’t this inconsistent over time (someone who used this as their ethical framework could predict specific discontinuities in their future values)?
I suppose you could say that it’s equivalent to “total utilitarianism that only takes into account the utility of already extant people, and only takes into account their current utility function [at the time the decision is made] and not their future utility function”.
(Under mere “total utilitarianism that only takes into account the utility of already extant people”, the government could wirehead its constituency.)
Yes, this is explicitly inconsistent over time. I actually would argue that the utility function for any group of people will be inconsistent over time (as preferences evolve, new people join, and old people leave) and any decision-making framework needs to be able to handle that inconsistency intelligently. Failure to handle that inconsistency intelligently is what leads to the Repugnant Conclusions.