Your first suggestion isn’t an additional alternative, it’s just a subdivision within 4 or 5.
I’m not sure I understand the second one. Are you trying to draw the distinction between consequentialism and non-consequentialist moralities? If so, I think that is usually considered to be a distinction in normative ethics rather than metaethics. Although I repeatedly use “preferences” and “values” in this post, that was just for convenience rather than trying to imply that morality must have something to do with values.
Perhaps, but it seems like there’s a substantive difference between those who believe there are no facts about what all intelligent beings should value and those who believe that, in addition, there are also no facts about what humans should value.
Could you give an example of one of these positions put in terms that would be inclusive of both consequentialist and non-consequentialist ethical theories?
Sure. 1. Most intelligent beings in the multiverse end up sharing similar moralities, and this comes about because there are facts about what morals one should have. For example, suppose there are facts about what preferences one should have, along with facts about what decision theory one should use or what prior one should have, and that species which manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. Paperclip maximizers occasionally arise, but they remain a relatively minor presence or tend to be taken over by more sophisticated minds.