It’s strange to me that you ask this question. Since you moved on from objective morality, I hope you haven’t turned into a kind of moral relativist, in particular one who doesn’t accept that a person can be morally wrong.
Yes, if I am honest I do believe that an action being “morally wrong” (in the same observer-independent sense that 2+2=5 is “wrong”) is a misnomer. Actions can be either acceptable to me or not, but there is no objectivity, and if Vladimir were to announce that he likes fried infants for dinner I could only say that I disapprove, not that he is objectively wrong.
I am not sure whether the standard meaning of the term “moral relativism” describes the above position, but I am certainly no nihilist.
That’s preference, which you can mention, or at least use. The morality about which one can be mistaken runs deeper, and at least part of it can be revealed by the right moral arguments: after hearing such arguments, you change your preference, either establishing it where there was none or reversing it. After the change, you see your previous position as having been mistaken, and it’s much less likely that you’ll encounter another argument that would move your conclusion back. If I have the right model of a person, I can assert that he is morally mistaken in this sense.
Note that this notion of moral mistake doesn’t require there to be an argument that would actually convince that person in a reasonable time, or for there to be no argument that would launch the person down a moral death spiral that would obliterate their humane morality. Updating preferences in response to new arguments, or to new experience, is a tool intended to show the shape of the concept that I’m trying to communicate.
after hearing such arguments, you change your preference, either establishing it where there was none or reversing it.
There do exist pieces of sensory data that have the ability to change a human’s preferences. For example, consider Stockholm syndrome.
Some less extreme cases would include getting a human to spend time with some other group of humans that s/he dislikes, and finding that they are “not as bad as they seem”.
It is far from clear to me that these kinds of processes are indicative of some kind of moral truth, moral progress, or moral mistakes. It’s just our brain-architecture behaving the way it does. Unless you think that people who suffer from Stockholm syndrome have discovered the moral truth of the matter (that certain terrorist organizations are justified in kidnapping, robbing banks, etc.), or that people who buy Christmas presents instead of sending the money to Oxfam or SIAI are making “moral mistakes”.
Talking about a human being having preferences is always going to be an approximation that breaks down in certain cases, such as Stockholm syndrome. Really, we are just meat computers that implement certain mathematically elegant ideas (such as having a “goal”) in an imperfect manner. There exist certain pieces of sensory input that have the ability to rewrite our goals, but so what?
I’m not talking about breaking one’s head with a hammer; there are many subtle arguments that you’d recognize as enlightening, propagating a preference from where it’s obvious to situations that you never connected to that preference, or evoking an emotional response where you didn’t expect one. As I said, there obviously are changes that can’t be considered positive, or that are just arbitrary reversals, but there are also changes that you can intellectually recognize as improvements.
As I said, there obviously are changes that can’t be considered positive, or that are just arbitrary reversals, but there are also changes that you can intellectually recognize as improvements.
If there exist lots of arbitrary reversals, how do you know whether any particular change is an improvement? Unless you can provide some objective criterion… which we both agree you cannot.
For some examples of judgments about changes being positive or negative, take a look at Which Parts Are “Me”?. You can look forward to changes in yourself, including changes in your emotional reactions to specific situations, that figure into your preference. When you are aware of such preferred changes, but they are still not implemented, that’s akrasia.
Now there are changes that only become apparent when you consider an external argument. Of course, it is you who considers the argument and decides what conclusion to draw from it, but the argument can come from elsewhere. By analogy, you may easily be able to check a solution to an equation while being unable to find it yourself.
The necessity for external moral arguments comes from you not being logically omniscient; their purpose is not to change your preference directly.