All of your examples dealing with morality take a consequentialist stance with regard to ethics. I don’t think that anyone has ever doubted that science might be relevant in computing the expected consequences of actions. So, I don’t think you are saying anything fundamentally new here by applying science to pairs of ethical maxims rather than to one at a time.
But a lot of people are not consequentialists—they are deontologists (i.e. believers in moral duties). That duties may be in conflict on occasion has also been known for a long time—I’m told this theme was common in Greek tragedy. I’m curious as to whether and how your methodology can find a toehold for science in a duty-based account of morality.
For example:
Everyone has a duty not to masturbate.
Every married person has a duty not to commit adultery.
Where is the conflict, even if science is brought in?
But a lot of people are not consequentialists—they are deontologists (i.e. believers in moral duties).
Actually, my impression is that the overwhelming majority of people are practitioners of folk virtue ethics in their own personal lives. (This typically applies to the self-professed consequentialists and deontologists too, including those who have made whole academic careers out of advocating these ideas in the abstract.) I expanded on this thesis once in a long and somewhat rambling comment, which I should rewrite in a more systematic way sometime.
It mostly boils down to maintaining and enforcing an elaborate system of tacit-agreement focal points in one’s interactions with other people, and priding oneself on being the sort of person who does this with consistent high skill, which is one of the basic elements of what the ancients called “virtue.” (Of course, when it comes to views that don’t have practical relevance for one’s personal life, it’s mostly about signaling games instead.)
I don’t think that anyone has ever doubted that science might be relevant in computing the expected consequences of actions.
Indeed. Put differently, science bears upon instrumental issues but not terminal ones. What would falsify this idea would be an example of new factual knowledge changing someone’s perception of the moral value of some action, with this change persisting even after adjusting for the effect the knowledge has on the instrumental value of the action.
Neither Harris nor Academian seems to have provided such an example, and I’m not sure one exists. Following are two examples of a slightly different type that also seem to fail.
Alice thinks homosexuality is immoral because it’s unnatural. Bob tells her that there are cases of animal homosexuality. Alice decides that it’s not unnatural and that it isn’t wrong. (But isn’t being natural the end, with sexuality being merely a means, such that what we see here is still just a revaluation of instruments?)
Alice thinks it’s wrong to X until Bob tells her about an evopsych theory under which condemning X was adaptive before people invented farming. Condemning X is not obviously adaptive or maladaptive today. Alice stops condemning X because she thinks her disapproval of it was just a mind trick and she’d rather not expend effort condemning things that aren’t “really wrong.” (Again, the end here is some sort of mental energy economy, while the instrument is her moral belief set?)
That said, I’m not too comfortable with the idea that new knowledge has no effect on terminal values. This is because the other contenders for influence on terminal values (e.g. ancient instinct) seem decidedly less open to my control.
P.S. I’m rather new here, and have not finished the sequences. If I’ve missed something that’s already been covered, I’d love a point in the correct direction.
...science bears upon instrumental issues but not terminal ones.
For what I consider non-obvious reasons, I disagree. As you say (and thanks for pointing this out explicitly),
What would falsify this idea would be an example of new factual knowledge changing someone’s perception of the moral value of some action, with this change persisting even after adjusting for the effect the knowledge has on the instrumental value of the action.
I have undergone changes in values that I would describe in this way. Namely, I had something I considered a terminal value that I stopped considering terminal upon realizing something factual about it. I’m guessing LucasSloan and Jayson_Virissimo are referring to similar experiences in these comments.
You could argue that its changing means “it wasn’t really terminal to begin with.” However, the separation of a given utility function into values and balancing operations is non-unique, so my current opinion is that the terminal/instrumental distinction is at best somewhat nominal. In other words, its ceasing to feel terminal may be the only sort of change worth calling “not being terminal anymore.”
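To make the non-uniqueness claim concrete, here is a minimal toy sketch (my own illustration, not from the thread, with made-up value names): two different ways of carving the same utility function into “terminal values” plus a balancing rule, which agree on every outcome.

```python
# Toy example: the same utility function under two decompositions.
# An outcome is a pair of made-up quantities (pleasure, fairness).

def u_decomp_1(outcome):
    # Decomposition 1: terminal values "pleasure" and "fairness",
    # balanced with weights 2 and 1.
    pleasure, fairness = outcome
    return 2 * pleasure + 1 * fairness

def u_decomp_2(outcome):
    # Decomposition 2: terminal values "harmony" (pleasure + fairness)
    # and "pleasure", balanced with equal weights.
    # Algebraically: (pleasure + fairness) + pleasure = 2*pleasure + fairness.
    pleasure, fairness = outcome
    harmony = pleasure + fairness
    return 1 * harmony + 1 * pleasure

# The two decompositions name different "terminal values" yet define
# the identical function over outcomes.
outcomes = [(0, 0), (1, 3), (2, -1), (5, 5)]
assert all(u_decomp_1(o) == u_decomp_2(o) for o in outcomes)
```

Since nothing in the agent’s behavior distinguishes the two decompositions, calling “pleasure and fairness” rather than “harmony and pleasure” the terminal values is a choice of description, not a fact about the function.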
So I think you should more precisely demand an example of a person’s utility function changing in response to knowledge. On the day of the factual realization I mentioned above, while it’s clear that my description of my utility function to myself and others changed, it’s not clear to me that the function itself changed much right away. But it does seem to me that over time, expressing it differently has gradually changed the function, though I can’t be sure.
I only hinted at all this when I added
First of all, you might change your morals in response to them not relating to each other in the way you expected. Ideas parse differently when they relate differently. “Teachers should be allowed to physically punish their students” might never feel the same to you after you find out it causes adult violence.
When I first made the utility function/description distinction, it was for abstract reasons (I was making a toy model of human morality for another purpose), and I didn’t quite notice the implications it would have for how people think of moral progress. Now in response to your demand for explicit examples, I’m a lot more motivated to sort this out. Thanks!
I have undergone changes in values that I would describe in this way. Namely, I had something I considered a terminal value that I stopped considering terminal upon realizing something factual about it.
Changing terminal values in response to learning is not only possible, but downright normal. We pursue one goal or another and find the life thus lived to be good or bad in our experience. We learn more about the goal-state or goal object, and it deepens or loses its attraction.
This needn’t mean that “the true terminal value” is pleasure or other positive emotion, even though happiness does play a role in such learning. Most people reject wire-heading: clearly pleasure is not their overarching “true terminal value.”
This needn’t mean that “the true terminal value” is pleasure or other positive emotion
True, it wouldn’t mean that pleasure was the actual terminal value, and the fact that many people reject wire-heading is evidence that pleasure is indeed not a terminal value for those people.
However, what role could “happiness” or feelings of well-being play, if not as true terminal values, if it’s in response to those feelings that people change (what they thought were) their terminal values?
Do what results in the smallest amount of duty-breaking.