As far as I’ve read, preference utilitarianism and its variants are about the only well-known systems of utilitarianism in philosophy that try to aggregate the utility functions of agents. Trying to come up with a universally applicable utility function seems to be more common; that’s what gets you hedonistic utilitarianism, prioritarianism, negative utilitarianism, and so forth. Other variants, like rule or motive utilitarianism, might take one of the above as a basis but be more concerned with implementation difficulties.
I agree that the term tends to be used too broadly around here—probably because it sounds like it points to something along the lines of “an ethic based on evaluating a utility function against options”, which is actually closer to a working definition of consequentialism. It’s not an especially well-defined word, though, even in philosophy.