If you’re unsure about a question of philosophy, the Stanford Encyclopedia of Philosophy is usually the best place to consult first. Its article on the history of utilitarianism says:
Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good. There are many ways to spell out this general claim. One thing to note is that the theory is a form of consequentialism: the right action is understood entirely in terms of consequences produced. What distinguishes utilitarianism from egoism has to do with the scope of the relevant consequences. On the utilitarian view one ought to maximize the overall good — that is, consider the good of others as well as one’s own good.
The Classical Utilitarians, Jeremy Bentham and John Stuart Mill, identified the good with pleasure, so, like Epicurus, were hedonists about value. They also held that we ought to maximize the good, that is, bring about ‘the greatest amount of good for the greatest number’.
Utilitarianism is also distinguished by impartiality and agent-neutrality. Everyone’s happiness counts the same. When one maximizes the good, it is the good impartially considered. My good counts for no more than anyone else’s good. Further, the reason I have to promote the overall good is the same reason anyone else has to so promote the good. It is not peculiar to me.
Note the last paragraph in particular. Utilitarianism is agent-neutral: while it does take your utility function into account, it gives it no more weight than anybody else’s.
The “general utilitarianism” that you mention is mostly just “having a utility function”, not “utilitarianism”: utility functions can in principle be used to implement ethical theories quite different from utilitarianism. This is a somewhat common confusion on LW (one which I’ve been guilty of myself, at times); I suspect it comes from the Sequences sometimes conflating the two.
EDIT: Also, in SEP’s Consequentialism article:
Since classic utilitarianism reduces all morally relevant factors (Kagan 1998, 17–22) to consequences, it might appear simple. However, classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:
Consequentialism = whether an act is morally right depends only on consequences (as opposed to the circumstances or the intrinsic nature of the act or anything that happens before the act).
Actual Consequentialism = whether an act is morally right depends only on the actual consequences (as opposed to foreseen, foreseeable, intended, or likely consequences).
Direct Consequentialism = whether an act is morally right depends only on the consequences of that act itself (as opposed to the consequences of the agent’s motive, of a rule or practice that covers other acts of the same kind, and so on).
Evaluative Consequentialism = moral rightness depends only on the value of the consequences (as opposed to non-evaluative features of the consequences).
Hedonism = the value of the consequences depends only on the pleasures and pains in the consequences (as opposed to other goods, such as freedom, knowledge, life, and so on).
Maximizing Consequentialism = moral rightness depends only on which consequences are best (as opposed to merely satisfactory or an improvement over the status quo).
Aggregative Consequentialism = which consequences are best is some function of the values of parts of those consequences (as opposed to rankings of whole worlds or sets of consequences).
Total Consequentialism = moral rightness depends only on the total net good in the consequences (as opposed to the average net good per person).
Universal Consequentialism = moral rightness depends on the consequences for all people or sentient beings (as opposed to only the individual agent, members of the individual’s society, present people, or any other limited group).
Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).
Agent-neutrality = whether some consequences are better than others does not depend on whether the consequences are evaluated from the perspective of the agent (as opposed to an observer).
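The contrast between merely “having a utility function” and utilitarianism proper can be made concrete. Below is a minimal sketch (all names are hypothetical, not from any source) in which an egoist and a total utilitarian both maximize a score over outcomes, but only the utilitarian aggregates everyone’s welfare impartially, per the Universal, Equal Consideration, Total, and Maximizing claims above:

```python
# Hypothetical sketch: "having a utility function" vs. utilitarian aggregation.
from typing import Callable, Dict

Outcome = Dict[str, float]  # maps each person's name to their welfare in that outcome

def egoist_score(outcome: Outcome, agent: str) -> float:
    # Egoism: only the agent's own welfare counts.
    return outcome[agent]

def total_utilitarian_score(outcome: Outcome) -> float:
    # Total + Universal + Equal Consideration: sum everyone's welfare,
    # each person weighted equally, regardless of who is evaluating.
    return sum(outcome.values())

def best_act(acts: Dict[str, Outcome], score: Callable[[Outcome], float]) -> str:
    # Maximizing Consequentialism: the right act is the one whose
    # consequences score best, not one that is merely satisfactory.
    return max(acts, key=lambda a: score(acts[a]))

acts = {
    "keep":  {"me": 10.0, "you": 0.0},   # I keep everything
    "share": {"me": 6.0,  "you": 6.0},   # we split, with a gain from cooperation
}

print(best_act(acts, lambda o: egoist_score(o, "me")))  # → keep
print(best_act(acts, total_utilitarian_score))          # → share
```

Both agents maximize a utility function, but only the second is utilitarian: its ranking of outcomes is agent-neutral, so every evaluator gets the same answer.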