I think you are on to something brilliant here. The thing that is new to me in your question is the recursive aspect of utilitarianism: if a theory of morality says the moral thing to do is to maximize utility, then clearly maximizing utility is itself a thing that has utility.
From here, in an engineering sense, you'd have at least two different places you could go. A naive place to go would be to have each person maximize total utility independently of what others are doing, noting that other people's utility summed up is much larger than one's own. Then to a very large extent your behavior would be driven by maximizing other people's utility. In a naive design involving, say, 100 utilitarians, one would be "over-driving" the system by ~100x, since each utilitarian would separately be calculating everybody else's utility and trying to maximize it. In some sense, it would be like a feedback system with way too much gain: 99 people all trying to maximize your utility.
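The "too much gain" point can be made concrete with a toy simulation. This is my own sketch, not anything from the original question: model one person's utility as a single number that each of N correctors independently nudges toward a target with proportional gain k. With N correctors all acting on the same quantity, the effective loop gain becomes N·k, and a gain that is stable for one corrector diverges wildly for 99.

```python
def simulate(n_correctors, k=0.5, target=1.0, steps=20):
    """Trajectory of one utility value 'helped' by n_correctors agents.

    Each corrector independently applies a proportional correction
    k * (target - x), so the effective loop gain is n_correctors * k.
    """
    x = 0.0
    trajectory = [x]
    for _ in range(steps):
        # all correctors apply their corrections simultaneously
        x = x + n_correctors * k * (target - x)
        trajectory.append(x)
    return trajectory

one = simulate(1)     # effective gain 0.5: converges smoothly to the target
many = simulate(99)   # effective gain 49.5: overshoots and diverges
```

With one corrector the value settles near the target; with 99 it oscillates with exploding amplitude, which is the feedback-engineering version of everyone trying to run everyone else's life at once.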
An alternative place to go would be to say utility is a meta-ethical consideration: an ethical system should have the property that it maximizes total utility. But then from engineering considerations you would expect that 1) there would be lots of different rule systems that come close to maximizing utility, and 2) among the simplest and most effective would be to have each agent maximize its own utility under the constraint of rules designed to get rid of anti-synergistic effects and to enhance synergistic ones. So you would expect contract law, anti-fraud law, laws against bad externalities, and laws requiring participation in good externalities. But in terms of "feedback," each agent in the system would be actively adjusting to maximize its own utility within the constraints of the rules.
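The feedback contrast with the naive design can be sketched the same way. Again this is my own illustrative assumption: here each agent adjusts only its own utility toward its own target (the "rules" being that nobody corrects anyone else's variable), so the loop gain per variable stays at k no matter how many agents there are, and the system converges.

```python
def simulate_decentralized(n_agents=100, k=0.5, steps=20):
    """Each agent nudges only its OWN utility toward its own target.

    Because no agent corrects anyone else's variable, the gain on each
    variable is just k, independent of n_agents, so the whole system
    converges and total utility approaches the sum of the targets.
    """
    utilities = [0.0] * n_agents
    targets = [1.0] * n_agents
    for _ in range(steps):
        utilities = [u + k * (t - u) for u, t in zip(utilities, targets)]
    return sum(utilities)

total = simulate_decentralized()  # approaches 100.0, each agent near its target
```

The design choice this illustrates: distributing the feedback so each agent owns one control variable keeps the system stable at any scale, which is roughly the engineering case for "own utility under rules" over "everyone computes everyone's utility."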
This might be called rule utilitarianism, but really I think it is a hybrid of rule utilitarianism and justified selfishness (Rand's egoism? Economics' "homo economicus," the rational utility maximizer?). It is a hybrid because you don't ONLY have rules that maximize utility, and you don't ONLY have maximizing individual utility as the moral rule.