Would anyone be interested in having a conversation with me about morality? Either publicly[1] or privately.
I have some thoughts about morality but I don’t feel like they’re too refined. I’m interested in being challenged and working through these thoughts with someone who’s relatively knowledgeable. I could instead spend a bunch of time eg. digging through the Stanford Encyclopedia of Philosophy to refine my thoughts, but a) I’m not motivated enough to do that and b) I think it’d be easier and more fun to have a conversation with someone about it.
To start, I think you need to be clear about what it is you’re actually asking when you talk about morality. It’s important to have clear and specific questions. It’s important to avoid wrong questions. When we ask if something is moral, are we asking whether it is desirable? To you? To the average person? To the average educated person? To one’s Coherent Extrapolated Volition (CEV)? To some sort of average CEV? Are we asking whether it is behavior that we want to punish in order to achieve desirable outcomes for a group? Reward?
It seems to me that a lot of philosophizing about morality and moral frameworks is about fit. Like, we have intuitions about what is and isn’t moral in different scenarios, and we try to come up with general rules and frameworks that do a good job of “fitting” these intuitions.
A lot of times our intuitions end up being contradictory. When this happens, you could spend time examining them and arriving at some sort of new perspective that no longer has the contradiction. But maybe it’s ok to have these contradictions. And/or maybe it’s too much work to actually get rid of them all.
I feel like there’s something to be said for more “enlightened” feelings about morality. Like if you think that A is desirable but that preference is based on incorrect belief X, and if you believed ~X you’d instead prefer B, something seems “good” about moving from A to B.
I’m having trouble putting my finger on what I mean by the above bullet point though. Ultimately I don’t see a way to cross the is-ought gap. Maybe what I mean is that I personally prefer for my moral preferences to be based on things that are true, but I can’t argue that I ought to have such a preference.
As discussed in this dialogue, it seems to me that non-naive versions of moral philosophies end up being pretty similar to one another in practice. A naive deontologist might tell you not to lie to save a child from a murderer, but a non-naive deontologist would probably weigh the “don’t lie” rule against other rules and come to the conclusion that you should lie to save the child. I think in practice, things usually add up to normality.
I kinda feel like everything is consequentialism. Consider a virtue ethicist who says that what they ultimately care about is acting in a virtuous way. Well, isn’t that a consequence? Aren’t they saying that the consequence they care about is them/others acting virtuously, as opposed to eg. a utilitarian caring about consequences involving utility?
[1] The feature’s been de-emphasized, but you can initiate a dialogue from another user’s profile page.