Some criticism that I hope you will find useful:

First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people’s intuition that they should not steal is not horribly misguided even if the thief cares about himself more and/or needs it more than the previous owner. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.
(3.8) I think the best course of action would be to assign equal value to yourself and other people, which seems nicely in accord with there being no objective reason for a moral difference between you.
I take issue with this simply because it is not even remotely similar to the way anyone acts. I’d prefer it if we could just admit that we care more about ourselves than about other people. Sure, utilitarianism says that the right thing to do would be to act as if everyone, including oneself, were of equal value, and the world would be a better place if people actually acted this way. But no one does, and endorsing utilitarianism does not usually bring them any closer.
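The gap between the impartial ideal and actual behaviour can be made concrete with a toy calculation (all utility numbers here are invented for illustration, not taken from the FAQ):

```python
# Toy illustration: how much weight an agent places on itself versus one
# other person changes which action the calculus endorses. Numbers invented.
actions = {
    "steal": {"self": 5, "other": -8},
    "dont_steal": {"self": 0, "other": 0},
}

def weighted_value(outcome, self_weight):
    # Weight `self_weight` on the agent's own utility, weight 1 on the other's.
    return self_weight * outcome["self"] + outcome["other"]

# Equal weighting, the impartial ideal the quoted passage describes:
impartial = max(actions, key=lambda a: weighted_value(actions[a], 1))
# Heavy self-weighting, closer to how people actually behave:
selfish = max(actions, key=lambda a: weighted_value(actions[a], 3))

assert impartial == "dont_steal"  # 5 - 8 < 0, so stealing loses
assert selfish == "steal"         # 3*5 - 8 > 0, so stealing wins
```

The same utility facts endorse different actions depending only on the self-weight, which is the sense in which "assign equal value to yourself" is an idealisation rather than a description.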
(5.31) Desire utilitarianism replaces preferences with desire. The differences are pretty technical and I don’t understand all of them, but desire utilitarians sure seem to think their system is better.
Then I would suggest either doing the research or not mentioning it, since this is not critical to the concept of consequentialism. I’m not entirely clear on it either.
(7.4) For example, in coherent extrapolated volition utilitarianism, instead of respecting a specific racist’s current preference, we would abstract out the reflective equilibrium of that racist’s preferences if ey was well-informed and in philosophical balance. Presumably, at that point ey would no longer be a racist.
But what if he doesn’t? You are right that this situation is a problem for simple preference utilitarianism that can be rectified by some other form of utilitarianism, but your suggested solution leads to a slippery slope: with CEV utilitarianism you can justify anything you want by claiming that everyone else’s moral preferences, in their CEV, would be exactly what you want them to be. I think the real issue here is that we respect some forms of preferences much more than others. Recall that pleasure utilitarianism (the extreme case of giving 0 weight to all but one form of preference) gives the answer we like in this case.
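One way to make "weighting some forms of preference more than others" concrete is a toy aggregation (all strengths invented for illustration), with pleasure utilitarianism as the limiting case where only hedonic effects get nonzero weight:

```python
# Toy effects of one discriminatory act on two kinds of preference.
# Strengths are invented for illustration.
effects = [
    {"kind": "moralistic", "delta": 4},  # the racist's bigoted preference is satisfied
    {"kind": "hedonic", "delta": -3},    # the victim's well-being is harmed
]

def action_value(effects, weights):
    # Kinds not listed in `weights` default to weight 0.
    return sum(weights.get(e["kind"], 0) * e["delta"] for e in effects)

# Simple preference utilitarianism: every kind of preference counts equally,
# so the act comes out net positive.
assert action_value(effects, {"moralistic": 1, "hedonic": 1}) == 1
# Pleasure utilitarianism as the extreme case: zero weight on everything
# but hedonic experience, so the act comes out negative.
assert action_value(effects, {"hedonic": 1}) == -3
```

The disagreement between the two verdicts lives entirely in the weight table, not in any claim about what the racist’s extrapolated preferences would be.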
First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people’s intuition that they should not steal is not horribly misguided even if the thief cares about himself more and/or needs it more than the previous owner. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.
Very strongly disagree, and not just because I’m sceptical about both. The article is supposed to be about consequentialism, not Yvain’s particular moral system. It should explain why you should apply your moral analysis to certain data (state of the world) instead of others (“rights”), but it shouldn’t get involved in how your moral analysis exactly works.
Yvain correctly mentions that you can be a paperclip maximiser and still be a perfect consequentialist.
UDT and TDT are decision theories, not “moral systems”. To the extent that consequentialism necessarily relies on some kind of decision theory (as is clearly the case, since it advocates choosing the optimal actions to take based on their outcomes), a brief mention of CDT, UDT and TDT explaining their relevance to consequentialist ethics (see e.g. the issue of “rule utilitarianism” vs. “act utilitarianism”) would have been appropriate.
I deleted a moderate wall of text because I think I understand what you mean now. I agree that two consequentialists sharing the same moral/utility function, but adopting different decision theories, will have to make different choices.
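That point can be sketched with a toy "prisoner's dilemma against an exact copy of yourself" (payoff numbers invented for illustration): both reasoners score outcomes with the same payoff table, but a CDT-style reasoner treats the twin's move as causally fixed, while a TDT/superrationality-style reasoner treats the twin's move as mirroring its own.

```python
# Toy one-shot prisoner's dilemma against an exact copy of yourself.
# payoff[my_move][twin_move]; "C" = cooperate, "D" = defect. Numbers invented.
payoff = {
    "C": {"C": 3, "D": 0},
    "D": {"C": 5, "D": 1},
}

def cdt_choice():
    # Causal reasoning: the twin's move is causally independent of mine,
    # and "D" is better against either fixed move (5 > 3 and 1 > 0).
    return "D" if all(payoff["D"][t] > payoff["C"][t] for t in "CD") else "C"

def tdt_choice():
    # TDT/superrationality-style reasoning: the twin runs my algorithm, so
    # it chooses whatever I choose; compare only the symmetric outcomes.
    return max("CD", key=lambda move: payoff[move][move])

# Same payoff table (same utility function), different choices:
assert cdt_choice() == "D"
assert tdt_choice() == "C"  # mutual cooperation (3) beats mutual defection (1)
```

This is the minimal sense in which two consequentialists sharing a utility function but not a decision theory can diverge.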
However, I don’t think it would be a very good idea to talk about various DTs in the FAQ. That is: showing that “people’s intuition that they should not steal is not horribly misguided”, by offering them the option of a DT that supports a similar rule, doesn’t seem to me like a worthy goal for the document. IMO, people should embrace consequentialism because it makes sense—because it doesn’t rely on pies in the sky—not because it can be made to match their moral intuitions. If you use that approach, you could in the same way use the fat man trolley problem to support deontology.
I might be misinterpreting you or taking this too far, but what you suggest sounds to me like “Let’s write ‘Theft is wrong’ on the bottom line because that’s what is expected by readers and makes them comfortable, then let’s find a consequentialist process that will give that result so they will be happy” (note that it’s irrelevant whether that process happens to be correct or wrong). I think discouraging that type of reasoning is even more important than promoting consequentialism.
people should embrace consequentialism because it makes sense—because it doesn’t rely on pies in the sky—not because it can be made to match their moral intuitions.
The whole point of CEV, reflexive consistency and the meta-ethics sequence is that morality is based on our intuitions.
Yes, I personally think that’s awful. LessWrong rightly tends to promote being sceptical of one’s mere intuitions in most contexts, and I think the same approach should be taken with morality (basically, this post on steroids).
(5.31) Desire utilitarianism replaces preferences with desire. The differences are pretty technical and I don’t understand all of them, but desire utilitarians sure seem to think their system is better.
Then I would suggest either doing the research or not mentioning it, since this is not critical to the concept of consequentialism. I’m not entirely clear on it either.
Desire utilitarianism doesn’t replace preferences with desires, it replaces actions with desires. It’s not a consequentialist system; it’s actually a type of virtue ethics. When confronted with the “fat man” trolley problem, it concludes that there are good agents that would push the fat man and other good agents that wouldn’t. You should probably avoid mentioning it.
The whole point of CEV, reflexive consistency and the meta-ethics sequence is that morality is based on our intuitions.
If this is to be useful, it would have to read “that our intuitions are based on morality”.
Desire utilitarianism doesn’t replace preferences with desires, it replaces actions with desires. It’s not a consequentialist system; it’s actually a type of virtue ethics. When confronted with the “fat man” trolley problem, it concludes that there are good agents that would push the fat man and other good agents that wouldn’t. You should probably avoid mentioning it.
Thank you. That makes more sense than the last explanation of it I read.