Do you think it’s plausible that the whole deontology/consequentialism/virtue ethics confusion might arise from our idea of morality actually being a conflation of several different things that serve separate purposes?
Like, say there’s a social technology that evolved to solve intractable coordination problems by getting people to rationally pre-commit to acting against their individual interests in the future; additionally, a lot of people have started to extend our instinctive compassion and tribal loyalties to the whole of humanity; also, people have a lot of ideas about which sorts of behaviors move us closer to some sort of Pareto frontier; and maybe, on top of that, there’s some sort of acausal bargain that a lot of different terminal values converge toward.
If you tried to maximize just one of those, you’d obviously run into conflicts with the others, and if you then used the same word to describe all of them, that might look like a paradox. How can something be clearly good and not good at the same time, you might wonder, not realizing that you’ve used the word to mean different things each time.
If I’m right about that, it could mean that when encountering the question of “what is most moral” in situations where different moral systems give different answers, the best answer might be not so much “I can’t tell, since each option would commit me to things I think are immoral” as “‘Morality’ isn’t a very well-defined word; could you be more specific?”
That’s entirely plausible.