Your preferences tell you how to aggregate the preferences of everyone else.
Edit: This post was downvoted to −1 when I came to it, so I thought I’d clarify. It’s since been voted back up to 0, but I just finished writing the clarification, so...
Your preferences are all that you care about (by definition). So you only care about the preferences of others to the extent that their preferences are a component of your own preferences. Now if you claim preference utilitarianism is true, you could be making one of two distinct claims:
“My preferences state that I should maximize the suitably aggregated preferences of all people/relevant agents,”
or
“The preferences of each human state that they should maximize the suitably aggregated preferences of all people/relevant agents.”
In both cases, a “suitable aggregation” has to be chosen, and so does the set of relevant agents. The latter is actually a sub-problem of the former: set the weights of non-relevant agents to zero in the aggregation. So how does the utilitarian aggregate? Well, that depends on what the utilitarian cares about, quite literally. What do the utilitarian’s preferences say? Maximize average utility? Total utility? Ultimately, what the utilitarian should be maximizing comes back to her own preferences (or the collective preferences of humanity, if the utilitarian is claiming that our preferences are all the same). Going back to the utilitarian’s own utility function also (potentially) resolves things like utility monsters, the preferences of the dead and the potentially-alive, and so forth.
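To make the weights point concrete, here is a minimal sketch (illustrative only; the function and numbers are my own, not from the post) of how total utilitarianism, average utilitarianism, and “which agents count” all reduce to a choice of weights in one weighted sum:

```python
def aggregate(utilities, weights):
    """Weighted sum of each agent's utility.

    The choice of weights *is* the choice of aggregation; a weight of
    zero simply excludes an agent, so "which agents are relevant" is a
    special case of "how to weight them".
    """
    return sum(w * u for w, u in zip(weights, utilities))

utilities = [3.0, 5.0, -1.0]  # each agent's utility for some outcome
n = len(utilities)

total = aggregate(utilities, [1.0] * n)           # 7.0  : total utilitarianism
average = aggregate(utilities, [1.0 / n] * n)     # ~2.33: average utilitarianism
first_two = aggregate(utilities, [1.0, 1.0, 0.0]) # 8.0  : zero weight = "not relevant"
```

Note that for a fixed population the total and average rankings agree (they differ by a constant factor of 1/n); they only come apart when the number of agents can vary, which is exactly where questions about the dead and the potentially-alive bite.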
If my preferences are such that only what happens to me matters, I don’t think you can call me a “preference utilitarian”.
Right, your preferences tell you whether you’re a utilitarian or not in the first place.