The point of rule utilitarianism isn’t only to save computational resources. It’s also that in any particular concrete situation we’re liable to have all sorts of non-moral motivations pulling at us, and those are liable to “leak” into whatever moral calculations we try to do and bias the answers. Whereas if we work out ahead of time what our values are and turn them into sufficiently clear-cut rules (or procedures, or something), there’s much less room for that to happen. Hence “don’t kill anyone even if it’s the right thing to do”, as nyan_sandwich puts it (I think quoting someone else, maybe EY).
(A tangential remark, which you should feel free to ignore: The above may make it sound as if rule utilitarianism is only appropriate for those whose goal is to prioritize morality above absolutely everything else, and therefore for scarcely anyone. I think this is wrong, for two reasons. Firstly, the values you encode into those clear-cut rules don’t have to be only of the sort generally called “moral”. You can build into them a strong preference for your own welfare over others’, or whatever. Secondly, you always have the option of working out what your moral principles say you should do and then doing something else; but the rule-utilitarian approach makes it harder to do that while fooling yourself into thinking you aren’t.)
Isn’t that the awesomest goal? :-)