I don’t get it—why are you assuming that virtue ethics or the rules of the people are right such that always converging to them is a good aspect of your morality? Why not assume people are mostly dumb and so utilitarianism takes away any hope you could possibly have of doing the right thing (say, deontology)?
All morality tells you to shut up and do what The Rules say.
Yeah, but meta-ethics is supposed to tell us where The Rules come from, not normative ethics, so normative ethics that implicitly answer the question are, like, duplicitous and bothersome. Or like, maybe I’d be okay with it, but the implicit meta-ethics isn’t at all convincing, and maybe that’s the part that bothers me.
Never mind; I misunderstood your initial comment, I think.
I thought you were saying: if pref-util is right, pref-utilists may self-modify away from it, which refutes pref-util.
I now think you’re saying: we don’t know what is right, but if we assume pref-util, then we’ll lose part of our ability to figure it out, so we shouldn’t do that (yet).
Also, you’re saying that most people don’t understand morality better than us, so we shouldn’t take their opinions more seriously than ours. (Agreed.) But pref-utilists do take those opinions seriously; they’re letting their normative ethics influence their beliefs about their normative ethics. (Well, duh, consequentialism.)
In which case I’d (naively) say, let pref-util redistribute the probability mass you’ve assigned to pref-util any way it wants. If it wants to sacrifice it all for majority opinions, sure, but don’t give it more than that.