I find that the majority of intellectually inclined people tend to embrace moral relativism and aesthetic relativism. Yet even those people act morally and arrive at similar basic aesthetic judgements. The pattern seems (to me) to be that, in both morality and aesthetics, there are basic truths and then a huge amount of cultural and personal variation, and the existence of variation does not negate the foundational truths. Here are a couple of examples of how this performative contradiction indicates that these foundational truths are, at the very least, believed in by humans, no matter what they say:
People in general (including moral relativists) have a clear conception of good and evil, believe in their existence, and act for the good. There is also an intuition of why good is better, related to concepts such as creation, destruction, harmony, etc., and an underlying choice to move towards creation and improvement.
I am a musician and have had extensive exposure to experimental and avant-garde music inside academia. There is a tendency in modern art to say that anything goes, but I feel that this is hypocritical. I have had discussions with people insisting that everything is subjective and that harmony does not really exist, but I believe it is telling that they would never, ever put on wrong (for lack of a better word) music for their own enjoyment.
I would love to hear your thoughts on this, especially if you consider yourself a moral relativist.
Seems to me that most people understand the difference between good and evil, and most people prefer good to evil, but we have a fashion where good is considered low-status, so many people are ashamed to admit their preferences publicly.
It’s probably some mix of signalling and counter-signalling. On the signalling side, powerful people are often evil, or at least indifferent towards good and evil. By pretending that I don’t care about good, I make myself appear more powerful. On the counter-signalling side, any (morally sane) idiot can say that good is better than evil; I display my sophistication by expressing a different opinion.
but we have a fashion where good is considered low-status,
I do not think that is true. There are exceptions of course, but in general most people would say that they prefer someone truthful to a liar, someone honest to someone deceitful, etc., and they also despise malevolence.
powerful people are often evil or at least indifferent towards good and evil
That is also not really true as far as I can tell. Again, there are exceptions, but the idea that powerful people are there because they oppressed the less powerful seems to be a residue of Marxist ideology. Studies have apparently found that in western societies successful people tend to be high in IQ and trait conscientiousness. This just means that people are powerful and successful because they are intelligent and hard-working.
Seems to me that most people understand the difference between good and evil
When you say understand, I assume you mean ‘believe in’ or ‘intuitively understand’? Because rational assessment does not reach that conclusion, as far as I can tell.
There’s some research that suggests that high socioeconomic status reduces compassion: https://www.scientificamerican.com/article/how-wealth-reduces-compassion/
I also added a Skeptics question: https://skeptics.stackexchange.com/q/38802/196
Thanks for sharing. I must admit I am not convinced by the methods of measuring such complex mental states, but I do not properly understand the science either, so... Do share the result from Stack Exchange if you get an answer (I can’t find how to ‘watch’ the question).
the idea that powerful people are there because they oppressed the less powerful seems to be a residue of Marxist ideology
The reality may be country-specific, or culture-specific. Whether more powerful people are more evil may be different in America, in Russia, in Saudi Arabia, etc.
And for status purposes, it’s actually the perception that matters. If people believe that X correlates with Y, even if it is not true, displaying X is the way to signal Y.
in western societies successful people tend to be high in IQ and trait conscientiousness
Yep, in “western societies”. I would say this could actually be a defining characteristic of “western societies”. By which I mean, to the rest of the world this sounds incredibly naive (or shamelessly hypocritical). I believe it’s actually true, statistically, for the record, but that came as a result of me interacting with people from western societies and noticing the cultural differences.
Also, notice the semantic shifts (“powerful” → “successful”; “good” → “high in IQ and trait conscientiousness”). Perhaps a typical entrepreneur is smart, conscientious, and good (or at least no worse than an average citizen); that seems likely. What about a typical oligarch? You know, usually a former member of some secret service, who made his career torturing innocent people, and who remains well connected after the end of his active service, which probably means he still participates in some activities, most likely criminal. I would still say higher IQ and conscientiousness help here, but it seems a safe bet that most of these people are quite evil in the conventional meaning of the word.
And for status purposes, it’s actually the perception that matters. If people believe that X correlates with Y, even if it is not true, displaying X is the way to signal Y.
Yes, you are right!
I believe it’s actually true, statistically, for the record, but that came as a result of me interacting with people from western societies and noticing the cultural differences.
I would still say higher IQ and conscientiousness help here, but it seems a safe bet that most of these people are quite evil in the conventional meaning of the word.
These are good points, and a very interesting observation about the semantic shifts. On further thought, I would say that in a corrupt society the evil will be powerful, while in a fair and good society the good will be; and of course in reality most cultures are a mixture. At the moment I believe it is impossible to be certain what our (or any other) society is really like, because the interpretations conflict and the quality of the sources is ambiguous. Moreover, intellectually we cannot define the good in any absolute sense (though in some sense we know its characteristics). In any case, let’s avoid a political discussion, or even one of specific moral particulars, for now, since the point of the thread is more general.
One thing I would like to bring up is that, to me, this does not seem to be a matter of signalling to others (though that can happen too). I would be quite confident that in interpersonal relationships people tend to value the ‘good’ if the community is even relatively healthy. I am talking about people and societies that act and strive for the good [1] while intellectually believing in moral relativism or something akin to it. Hence the performative contradiction. This is an internal contradiction that I believe stems from our rejection of traditional wisdom (at the intellectual, but not yet at the performative, level) and the resulting incoherent theory of being.
[1] Even propaganda bases its ideals on a (twisted) conception of good.
It’s a central notion in the computational metaethics that, though not under that name, was tentatively sketched by Yudkowsky in the metaethics sequence. Humans share, because of evolution, a kernel of values that works as a foundation of what we call “morality”. Morality is thus both objective and subjective: objective because these values are shared and encoded in the DNA (some are even mathematical equilibria, such as cooperation in the iterated prisoner’s dilemma, IPD); subjective because, being computations, they exist only insofar as our minds compute them, and outside of the common nucleus they can vary depending on culture, life experiences, contingencies, etc.
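To make the IPD point concrete, here is a minimal sketch (my own illustration; the payoff numbers and strategy names are standard textbook choices, not anything taken from the sequence) of why mutual cooperation can be stable in the iterated prisoner’s dilemma:

```python
# Minimal illustrative sketch: cooperation in the iterated prisoner's dilemma.
# Payoffs follow the standard ordering T > R > P > S with 2R > T + S.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Return the total payoffs of two strategies over an iterated game."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("TFT vs TFT:  ", play(tit_for_tat, tit_for_tat))      # (600, 600)
print("TFT vs AllD: ", play(tit_for_tat, always_defect))    # (199, 204)
print("AllD vs AllD:", play(always_defect, always_defect))  # (200, 200)
```

With these numbers a defector gains almost nothing against a reciprocator (204 vs 199) and does far worse than two reciprocators do with each other (600 each), which is the sense in which mutual cooperation can be an equilibrium of the repeated game.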
Thank you for pointing me to the articles. So much material!
subjective because, being computations, they exist only insofar as our minds compute them
This is where I believe the rational analysis has gone wrong. When you say computation, I understand it in one of two ways:
[1] Humans are consciously computing
[2] Humans are unconsciously computing
[1] This is clearly not the case, as even today we are still trying to find a computational basis for morality. But we already have advanced systems of values, so they must have been created before this attempt of ours.
[2] That could be a possibility, but I have not seen any evidence for such a statement (please point me to the evidence if it exists!). In contrast, we have an insane amount of evidence for the evolution and transmission of values through stories.
So, values (I would propose) have not been computed at all; they have evolved.
To quote myself from my answer to Manfred below:
Since we are considering evolution we can make the case that cultures evolved a morality that corresponds to certain ways of being that, though not objectively true, approximate deeper objective principles. An evolution of ideas.
I think the problem we are facing is that, since such principles were evolved, they are not discovered through rationality but through trying them out. The danger is that if we do not find rational evidence quickly (or more efficiently explore our traditions with humility) we might dispense with core ideas and have to wait for evolution to wipe the erroneous ideas out.
That could be a possibility, but I have not seen any evidence for such a statement (please point me to the evidence if it exists!). In contrast, we have an insane amount of evidence for the evolution and transmission of values through stories.
Computation, in this case, does not refer to mental calculation. It simply points out that our brain is elaborating information to come up with an answer, whether in the form of stories or of simply evolved stimulus-response. The two views are not in opposition; they simply point to a basic function of the brain, which is to elaborate information rather than, say, pump blood or filter toxins.
I see what you mean but I am not sure we are exactly on the same page. Let me try to break it down and you can correct me if I misunderstood.
It seems to me that you are thinking of computation as a process for “coming up with an answer”, whereas I am talking about having no answers at all but acting out patterns of action transmitted culturally, even before verbal elaboration. This transmission of action patterns was first performed by rituals and rites, as can be observed in primitive cultures. They were then elaborated as stories, myths, religion, drama, literature, etc., and of course at some point became material for manipulation by abstract thought.
So the difference with what you are saying is that you assume an ‘elaboration of information’ by the brain, whereas at the level of ideas the elaboration happens culturally, through an evolutionary process. The consequence is that the values have to be accepted (believed in) and only then can they (maybe) be experientially confirmed. This also explains the ‘ought from an is’ issue.
Maybe it’s because I’m coming from a computer science background, but I’m thinking of computation as much more basic than that. Whether you’re elaborating myths or reacting to the sight of a snake, your brain is performing calculations. I think we agree that our values are deeply ingrained, although it’s much more difficult to say exactly to what level(s). I do not agree that our values are selected through memetic adaptation, or at least that’s only part of the story.
I would be grateful if you can indulge my argument a bit further.
Maybe it’s because I’m coming from a computer science background, but I’m thinking of computation as much more basic than that.
I think I clumsily gave the impression that I deny such computation. I was referring to computations that generate value presuppositions. Of course the brain is computing on multiple levels, whether we are conscious of it or not. In addition, there seems to be evidence of what may be called an emergent proto-morality in animals which, if true, is completely biologically determined. Things become more complex when we have to deal with higher, more elaborated values.
I’ve read a bit through the metaethics sequence and it seems to me to be an attempt to generate fundamental values through computation. If it were successful, some kind of implementation would demonstrate it and/or some biological structure would have been identified, so I assume this is all speculative. I have to admit that I didn’t study the material in depth, so please tell me if you have found demonstrable results arising from it that I simply haven’t understood.
So to sum up:
Your view is that there is an objective morality that is shared and encoded in the DNA (parts of it are even mathematical equilibria, such as cooperation in the IPD). These values are also subjective because, being computations, they exist only insofar as our minds compute them, and outside of the common nucleus they can vary depending on culture, life experiences, contingencies, etc.
My view is that your proposition of a biological encoding may be correct up to a certain (basic) level, but many values are transmitted through, to use your terminology, memetic adaptation. These are objective in the sense that they approximate deeper objective principles that allow for survival and flourishing. Subjective ideas can be crafted on top of these values, and those may or may not prove beneficial.
I do not agree that our values are selected through memetic adaptation, or at least that’s only part of the story.
It seems to me that it is unquestionably part of the story. Play as a built-in mimetic behaviour for the transference of cultural schemas. Rituals and rites as part of all tribal societies. Stories as the means of transmitting values and as the basis of multiple (all?) civilisations, including ours, so…
Am I missing something? What is the rational basis on which you choose to underemphasise the hypothesis of cultural propagation through memetic adaptation and stories?
I think this is totally consistent with relativism (Where I mean relativism in the sense of moral and aesthetic judgments being based on subjective personal taste and contingent learned behavior. Moral and aesthetic judgments still exist and have meaning.).
The fact that people make the same moral judgments most of the time is (I claim) because humans are in general really, really similar to each other. 200 years ago this would be mysterious and might be taken as evidence of moral truths external to any human mind, but now we can explain this similarity in terms of the origin of human values by evolution.
I am not sure the possibility of an objective basis is taken seriously enough.
Where I mean relativism in the sense of moral and aesthetic judgments being based on subjective personal taste and contingent learned behaviour. Moral and aesthetic judgements still exist and have meaning.
Yes, but there is a spectrum of meaning. There is the ephemeral meaning of hedonistic pleasure or satiation (I want a doughnut). But we sacrifice shallow for deeper meaning: unrestricted sex for love, intimacy, trust, and family; our doughnut for health and a better appearance. And then we create values that span wider spatial and temporal areas. For something to be meaningful, it has to matter (be a positive force) across as wide a spatial area as possible and to extend (as a positive force) into the future.
Moral relativism, if properly followed to its conclusion, equalises good and evil and renders the term ‘positive’ void. And then:
but now we can explain this similarity in terms of the origin of human values by evolution.
Since we are considering evolution we can make the case that cultures evolved a morality that corresponds to certain ways of being that, though not objectively true, approximate deeper objective principles. An evolution of ideas.
I think the problem we are facing is that, since such principles were evolved, they are not discovered through rationality but through trying them out. The danger is that if we do not find rational evidence quickly (or more efficiently explore our traditions with humility) we might dispense with core ideas and have to wait for evolution to wipe the erroneous ideas out.
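To make the “evolution of ideas” picture concrete, here is a minimal sketch (my own illustration, with made-up payoff numbers; not a claim about how cultures actually work) in which two norms spread by imitation in proportion to how well their carriers fare against the current population mix. Nobody reasons their way to the better norm; it is simply tried out and selected:

```python
# Minimal illustrative sketch: "evolution of ideas" modelled crudely as
# replicator dynamics. A norm spreads when its carriers do better than
# average against the current mix of the population.

# Average per-interaction payoff of norm i against norm j (rows = self).
# The numbers reuse the per-round IPD intuition from the sketch above:
# reciprocators thrive with each other, defectors exploit them only slightly.
PAYOFF = [
    [3.00, 1.00],   # reciprocating norm vs (reciprocator, defector)
    [1.02, 1.00],   # defecting norm     vs (reciprocator, defector)
]

def step(x, dt=0.1):
    """One replicator-dynamics step; x is the share of the reciprocating norm."""
    f_coop = x * PAYOFF[0][0] + (1 - x) * PAYOFF[0][1]
    f_defect = x * PAYOFF[1][0] + (1 - x) * PAYOFF[1][1]
    f_avg = x * f_coop + (1 - x) * f_defect
    return x + dt * x * (f_coop - f_avg)

x = 0.10  # the cooperative norm starts as a small minority
for generation in range(400):
    x = step(x)
print(f"share of the cooperative norm after selection: {x:.2f}")  # ~1.00
```

Under these payoffs the cooperative norm spreads even from a small minority; flip the numbers so that its carriers do worse against the mixed population and it is wiped out instead, which is the “wait for evolution to correct the error” scenario, with all the cost that implies.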
Human morals, human preferences, and the human ability to work to satisfy those morals and preferences on large scales are all quite successful from an evolutionary perspective, and make use of elements seen elsewhere in the animal kingdom. There’s no necessity for any sort of outside force guiding human evolution, or any pre-existing thing it’s trying to mimic, therefore we shouldn’t presume one.
Let me give an analogy for why I think this doesn’t remove meaning from things (it will also be helpful if you’ve read the article Fake Reductionism from the archives). We like to drink water, and think it’s wet. Then we learn that water is made of molecules, which are made of atoms, etc, and in fact this idea of “water” is not fundamental within the laws of physics. Does this remove meaning from wetness, and from thirst?
There’s no necessity for any sort of outside force guiding human evolution, or any pre-existing thing it’s trying to mimic, therefore we shouldn’t presume one.
I didn’t say anything about an outside force guiding us. I am saying that if the structure of reality has characteristics under which certain moral values produce evolutionarily successful outcomes, it follows that these moral values correspond to an objective evolutionary reality.
Does this remove meaning from wetness, and from thirst?
You are talking about meanings referring to what something is. But moral values are concerned with how we should act in the world. It is the old “ought from an is” issue. You can always drill in with horrific thought experiments concerning good and evil. For example:
Would it be OK to enslave half of humanity and use them as constantly tortured, self-replicating power supplies for the other half, if we could find a system that would guarantee they can never escape to threaten our own safety? If the system is efficient and you have no concept of good and evil, why do you think that is wrong? Whatever your answer is, ask why again until you reach the point where you get an “ought” from an “is” without a value presupposition.
I didn’t say anything about an outside force guiding us. I am saying that if the structure of reality has characteristics under which certain moral values produce evolutionarily successful outcomes, it follows that these moral values correspond to an objective evolutionary reality.
I agree that this is a perfectly fine way to think of things. We may not disagree on any factual questions.
Here’s a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality? Like, suppose there was a race of aliens that evolved intelligence without knowing their kin. Would we expect them to be motivated by filial love, once we explained it to them and gave them technology to track down their relatives? Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
Would it be OK to enslave half of humanity...
No. Why? Because I would prefer not to. Isn’t that sufficient to motivate my decision? A little glib, I know, but I really don’t see this as a hard question.
When people say “what is right?”, I always think of this as being like “by what standard would we act, if we could choose standards for ourselves?” rather than like “what does the external rightness-object say?”
We can think as if we’re consulting the rightness-object when working cooperatively with other humans; it will make no difference. But when people disagree, the approximation breaks down, and it becomes counter-productive to think you have access to The Truth. When people disagree about the morality of abortion, it’s not that (at least) one of them is factually mistaken about the rightness-object; they are disagreeing about which standard to use for acting.
Here’s a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality?
Though tempting, I will resist answering this as it would only be speculation based on my current (certainly incomplete) understanding of reality. Who knows how many forms of mind exist in the universe.
Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
If by intelligence you mean human-like intelligence, and if the AI is immortal or at least sufficiently long-lived, it should extract the same moral principles (assuming that I am right and they are characteristics of reality). Apart from that, your sentence uses the words ‘understand’ and ‘value’, which are connected to consciousness. Since we do not understand consciousness, and the possibility of constructing it algorithmically is in doubt (to put it lightly), I would say that the AI will do whatever the conscious humans programmed it to do.
No. Why? Because I would prefer not to. Isn’t that sufficient to motivate my decision? A little glib, I know, but I really don’t see this as a hard question.
No, sorry, that is not sufficient. You have a reason, and you need to dig deeper until you find your fundamental presuppositions. If you want to follow my line of thought, that is...