What experiences would you anticipate in a world where utilitarianism is true that you wouldn’t anticipate in a world where it is false?
In the sense that we might want to use it or not use it as the driving principle of a superpowerful genie or whatever.
Casting morality as facts that can be true or false is a very convenient model.
I don’t think most people agree that useful = true.
Woah there. I think we might have a containment failure across an abstraction barrier.
Modelling moral propositions as facts that can be true or false is useful (same as with physical propositions). Then, within that model, utilitarianism is false.
“Utilitarianism is false because it is useful to believe it is false” is a confusion of levels, IMO.
Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral propositions as true is useful. If you run a country, proclaiming patriotic duty as a moral truth is very useful.
I don’t see how this answers my question. And it certainly doesn’t answer the original question.
I meant model::useful, not memetic::useful.
It doesn’t answer the original question. You asked in what sense it could be true or false, and I answered that it being “true” corresponds to it being a good idea to hand it off to a powerful genie, as a proxy test for whether it is the preference structure we would want. I think that does answer your question, albeit with some clarification. Did I misunderstand you?
As for the original question, in a world where utilitarianism were “true”, I would expect moral philosophers to make judgments that agreed with it, for my intuitions to find it appealing as opposed to stupid, and so on.
Naturally, this correspondence between “is” facts and “ought” facts is artificial and no more or less justified than, e.g., induction; we think it works.
Not explicitly, but most people tend to believe what their evolutionary and cultural adaptations tell them is useful to believe, and don’t think too hard about whether it’s actually true.
If we use deontology, we can control the genie. If we use utilitarianism, we can control the world. I’m more interested in the world than the genie.
Be careful with that word. You seem to be using it to refer to consequentialism, but “utilitarianism” usually refers to a much more specific theory that you would not want to endorse simply because it’s consequentialist.
?
What do you mean by utilitarianism?
I mean that the genie makes his decisions based on the consequences of his actions. I guess consequentialism is technically more accurate. According to Wikipedia, utilitarianism is a subset of it, but I’m not really sure what the difference is.
Ok. Yeah, “consequentialism” or “VNM utilitarianism” is usually used for that concept, to distinguish it from the moral theory that says you should make choices consistent with a utility function constructed by some linear aggregation of “welfare” or whatever across all agents.
It would be a tragedy to adopt Utilitarianism just because it is consequentialist.
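The distinction above can be sketched in code (a toy illustration; the function names, weights, and “welfare per agent” numbers are mine, not from the thread). A consequentialist agent ranks outcomes by *some* utility function; a (total) utilitarian is the special case where that function is a linear aggregation of every agent’s welfare:

```python
def total_utilitarian_utility(welfares, weights=None):
    """Utilitarian utility of an outcome: weighted sum of each agent's welfare."""
    if weights is None:
        weights = [1.0] * len(welfares)
    return sum(w * u for w, u in zip(weights, welfares))

def best_outcome(outcomes, utility):
    """Generic consequentialist choice: pick the outcome with highest utility."""
    return max(outcomes, key=utility)

# Two candidate worlds, each listing the welfare of three agents.
world_a = [3.0, 3.0, 3.0]   # equal, modest welfare (total 9)
world_b = [10.0, 0.0, 0.0]  # one agent very well off, two badly off (total 10)

# The total utilitarian prefers world_b (10 > 9), while a consequentialist
# with a different utility function -- say min-welfare, a Rawlsian-style
# rule -- prefers world_a. Both are consequentialists; only one is a
# utilitarian in the aggregation sense.
print(best_outcome([world_a, world_b], total_utilitarian_utility))  # -> [10.0, 0.0, 0.0]
print(best_outcome([world_a, world_b], min))                        # -> [3.0, 3.0, 3.0]
```

The point of the sketch is just that `best_outcome` is the consequentialist part, and it is agnostic about which utility function you plug in; the linear-aggregation choice is what makes it utilitarianism specifically.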
I get consequentialism. It’s Utilitarianism that I don’t understand.
Minor nitpick: Consequentialism =/= VNM utilitarianism
Right, they are different. A creative rereading of my post could interpret it as talking about two concepts DanielLC might have meant by “utilitarianism”.