I am not sure the possibility of an objective basis is taken seriously enough.
By relativism I mean moral and aesthetic judgements being based on subjective personal taste and contingent learned behaviour. Moral and aesthetic judgements still exist and have meaning.
Yes, but there is a spectrum of meaning. There is the ephemeral meaning of hedonistic pleasure or satiation (I want a doughnut). But we sacrifice shallow meaning for deeper meaning: unrestricted sex for love, intimacy, trust and family; our doughnut for health and better appearance. And then we create values that span wider spatial and temporal areas. For something to be meaningful it has to matter (be a positive force) across as wide a spatial area as possible, and extend (as a positive force) into the future.
Moral relativism, if properly followed to its conclusion, equalises good and evil and renders the term ‘positive’ void. And then:
but now we can explain this similarity in terms of the origin of human values by evolution.
Since we are considering evolution we can make the case that cultures evolved a morality that corresponds to certain ways of being that, though not objectively true, approximate deeper objective principles. An evolution of ideas.
I think the problem we are facing is that, since such principles were evolved, they are not discovered through rationality but through trying them out. The danger is that if we do not find rational evidence quickly (or more efficiently explore our traditions with humility) we might dispense with core ideas and have to wait for evolution to wipe the erroneous ideas out.
Human morals, human preferences, and human ability to work to satisfy those morals and preferences on large scales, are all quite successful from an evolutionary perspective, and make use of elements seen elsewhere in the animal kingdom. There’s no necessity for any sort of outside force guiding human evolution, or any pre-existing thing it’s trying to mimic, therefore we shouldn’t presume one.
Let me give an analogy for why I think this doesn’t remove meaning from things (it will also be helpful if you’ve read the article Fake Reductionism from the archives). We like to drink water, and think it’s wet. Then we learn that water is made of molecules, which are made of atoms, etc, and in fact this idea of “water” is not fundamental within the laws of physics. Does this remove meaning from wetness, and from thirst?
There’s no necessity for any sort of outside force guiding human evolution, or any pre-existing thing it’s trying to mimic, therefore we shouldn’t presume one.
I didn’t say anything about an outside force guiding us. I am saying that if the structure of reality has characteristics in which certain moral values produce evolutionarily successful outcomes, it follows that these moral values correspond to an objective evolutionary reality.
Does this remove meaning from wetness, and from thirst?
You are talking about meanings referring to what something is. But moral values are concerned with how we should act in the world. It is the old “ought from an is” issue. You can always drill in with horrific thought experiments concerning good and evil. For example:
Would it be OK to enslave half of humanity and use them as constantly tortured, self-replicating power supplies for the other half, if we could find a system that would guarantee they can never escape to threaten our own safety? If the system is efficient and you have no concept of good and evil, why do you think that is wrong? Whatever your answer is, keep asking why until you reach the point where you get an “ought from an is” without a value presupposition.
I didn’t say anything about an outside force guiding us. I am saying that if the structure of reality has characteristics in which certain moral values produce evolutionarily successful outcomes, it follows that these moral values correspond to an objective evolutionary reality.
I agree that this is a perfectly fine way to think of things. We may not disagree on any factual questions.
Here’s a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality? Like, suppose there was a race of aliens that evolved intelligence without knowing their kin—would we expect them to be motivated by filial love, once we explained it to them and gave them technology to track down their relatives? Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
Would it be OK to enslave half of humanity...
No. Why? Because I would prefer not. Isn’t that sufficient to motivate my decision? A little glib, I know, but I really don’t see this as a hard question.
When people say “what is right?”, I always think of this as being like “by what standard would we act, if we could choose standards for ourselves?” rather than like “what does the external rightness-object say?”
We can think as if we’re consulting the rightness-object when working cooperatively with other humans—it will make no difference. But when people disagree, the approximation breaks down, and it becomes counter-productive to think you have access to The Truth. When people disagree about the morality of abortion, it’s not that (at least) one of them is factually mistaken about the rightness-object; they are disagreeing about which standard to use for acting.
Here’s a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality?
Though tempting, I will resist answering this as it would only be speculation based on my current (certainly incomplete) understanding of reality. Who knows how many forms of mind exist in the universe.
Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
If by intelligence you mean human-like intelligence, and if the AI is immortal or at least sufficiently long-lived, it should extract the same moral principles (assuming that I am right and they are characteristics of reality). Apart from that, your sentence uses the words ‘understand’ and ‘value’, which are connected to consciousness. Since we do not understand consciousness, and the possibility of constructing it algorithmically is in doubt (to put it lightly), I would say that the AI will do whatever the conscious humans programmed it to do.
No. Why? Because I would prefer not. Isn’t that sufficient to motivate my decision? A little glib, I know, but I really don’t see this as a hard question.
No, sorry, that is not sufficient. You have a reason, and you need to dig deeper until you find your fundamental presuppositions. If you want to follow my line of thought, that is...