I recently had a discussion with a fellow rationalist about the strengths and weaknesses of utilitarianism. We realized that our disagreement was at least in part one of words:
When he said utilitarianism, I gather that he meant “trying to predict and calculate utility as precisely as possible for each decision you make”. (This is often infeasible in practice, for several reasons.)
When I say utilitarianism, I mean “having maximum utility as a terminal goal and using any convenient heuristics to approximate that terminal goal”.
The utilitarianism I describe is not necessarily rule utilitarianism, because I am not necessarily trying to establish a clear set of rules to follow. It is more of a “winging it” utilitarianism: I try to guess which action will lead to greater utility when that is feasible, but I often fall back on guidelines like “tell the truth whenever reasonably convenient” and “only go by car if the alternative is too slow or painful”.
Does this make sense? Is this kind of thing distinct from the recognized classical kinds of utilitarianism? Do we have any terms for this sort of thing?
Criterion of rightness vs. decision procedure (also: multi-level utilitarianism)
Ideas similar to these were present to some degree among early utilitarians like Mill and Sidgwick, and the concepts were crystallized by later philosophers including Bales (1971) and Hare (1981).
This does seem to correspond to what I am trying to gesture at. Thanks!
I believe LW calls it “ethical injunctions”.
Thanks! But no, I think what I am getting at is the opposite of that:
Ethical injunctions, as I understand them, mean setting up formal rules that override your utility calculus: Never lie, even if it looks like the optimal thing to do.
What I am talking about is setting up informal guidelines that can be overridden by my utility calculus: Never lie, unless it looks like the optimal thing to do.
No human can feasibly avoid falling back on heuristics much of the time, so to the extent that your behavior is accurately captured by your description here, you are sitting firmly in the reference class of act utilitarian humans.
But also, if I may (unless you’re already doing it): aim to choose policies rather than individual acts.
Vibe utilitarianism, the counterpart to vibe decision theory.
Thanks. How many people do you think would understand it if I called myself a “vibe utilitarian”?
Excluding me, probably zero, but Vibe Utilitarianism could be a good April Fools’ Day post for next year.