If you think these are merely six of countless hypotheses, do you think you could come up with, say, two more?
Two more possible positions:

1. There is a great variety of possible consistent preferences that intelligent beings can have, and there are no facts about what one should value that apply to all possible intelligent beings. However, there are still facts about rationality that do apply to all intelligent beings. Also, if you narrow the scope from “intelligent beings” to “humans”, most humans, when consistent, share similar preferences, and there exist facts about what they should value. (So, 4 or 5 for intelligent beings in general, but 1 for humans.)

2. Morality has nothing to do with value.
Your first suggestion isn’t an additional alternative, it’s just a subdivision within 4 or 5.
I’m not sure I understand the second one. Are you trying to draw the distinction between consequentialist and non-consequentialist moralities? If so, I think that is usually considered a distinction in normative ethics rather than metaethics. Although I repeatedly use “preferences” and “values” in this post, that was just for convenience rather than to imply that morality must have something to do with values.
> Your first suggestion isn’t an additional alternative, it’s just a subdivision within 4 or 5.
Perhaps, but it seems like there’s a substantive difference between those who believe there are no facts about what all intelligent beings should value and those who believe that, in addition, there are also no facts about what humans should value.
> Although I repeatedly use “preferences” and “values” in this post, that was just for convenience rather than trying to imply that morality must have something to do with values.
Could you give an example of one of these positions put in terms that would be inclusive of both consequentialist and non-consequentialist ethical theories?
Sure.

1. Most intelligent beings in the multiverse end up sharing similar moralities. This comes about because there are facts about what morals one should have. For example, suppose there are facts about what preferences one should have, along with facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.