Multiple thoughts, but these are the most relevant and immediate:
This entirely evades most discussion of morality in a way that, admittedly, will read to most moral philosophers as insistently making a foundational-level mistake. But there is some sense in which the foundations, and the reasoning about them, proposed by most attempts at first-principles reasoning create something like Zeno’s paradox for morality: morality becomes something that is not allowed to be in the world or touch the world, yet allegedly still has force and relevance. If you want moral philosophers to engage seriously with empirical definitions of morality, you need to reductively develop the space where analytic philosophy and scientific epistemics meet, and then get their assent to it. Which I suspect is difficult.
There still are a number of immediate prior dependencies for the specific type of evolution you propose, and these are therefore more fundamentally important than the process itself. “The process itself” is phenomenological, a way of describing a distribution, in approximately the same way that probability is (i.e., predictive in aggregate but not in individual moments). But just as the number of faces on a die and the number of dice rolled determine the distribution and can be modified in advance, the environmental precursors to moral evolution can also be deliberately chosen (see the sketch below).
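To make the die analogy concrete, here is a minimal Python sketch (purely illustrative; the dice parameters stand in for whatever environmental precursors one might choose): each individual roll stays unpredictable, but changing the parameters in advance reshapes the aggregate distribution predictably.

```python
import random
from collections import Counter

def roll_sum(num_dice: int, num_faces: int) -> int:
    """Sum of one roll of num_dice fair dice, each with num_faces faces.
    Any single roll (a 'moment') is unpredictable."""
    return sum(random.randint(1, num_faces) for _ in range(num_dice))

def empirical_distribution(num_dice: int, num_faces: int,
                           trials: int = 100_000) -> Counter:
    """The aggregate, by contrast, is predictable: the distribution of
    sums is fixed once the dice parameters are fixed."""
    return Counter(roll_sum(num_dice, num_faces) for _ in range(trials))

# Changing the 'precursors' (number and kind of dice) in advance reshapes
# the whole distribution, without controlling any individual roll.
print(empirical_distribution(2, 6).most_common(3))   # mode near 7, wide relative spread
print(empirical_distribution(10, 6).most_common(3))  # mode near 35, narrower relative spread
```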
I don’t have an enormous amount of experience discussing meta-ethics with philosophers. As this post and the first and fourth posts in the sequence together attempt to spell out, I’m taking a very scientific/engineering approach to what has generally been regarded as a philosophical question (specifically, an approach combining those of a biologist, a psychologist, a game theorist, a sociologist, and an engineer-of-societies). Most of the two or three actual trained philosophers I have discussed this with at any length have acknowledged that Evolutionary Psychology does have some epistemic input into why human moral intuitions are what they are, and thus that the earlier philosophical approach of treating moral intuitions as a somewhat mysterious and suspect but otherwise free-floating form of evidence for the truth of moral statements needs to be updated.
If a philosopher didn’t acknowledge that (which is rather what I was anticipating when I wrote this sequence, back when my practical experience of interacting with philosophers consisted almost entirely of reading summaries of their views), then I agree, I doubt my meta-ethics are going to make much sense to them, and as you suggest, first developing the idea that they do would be necessary for us to have much of a conversation. In practice, I have been pleasantly surprised that this wasn’t necessary when I talked to some actual philosophers/students of philosophy with a deep interest in AI alignment — possibly a self-selecting group out of all philosophers.
None of this is helped by the fact that Evolutionary Psychology has an even worse case of the usual problem with evolutionary thought: while not unfalsifiable (so it is still a science), it is generally far easier to come up with a plausible evolutionary explanation than to actually test one – especially when your experimental subjects would be entire societies of sapient beings – so there are a lot of inadequately-tested evolutionary-psychology hypotheses kicking around, some proportion of which are doubtless claptrap. (This is something that AI might actually help with, by making it easier to simulate interactions between sapient beings, modulo the rather interesting experimental-ethics questions that proposal raises.) Also, anything involving psychology or ethics tends to interact strongly with people’s political beliefs, so some of these untested speculations are, in my personal opinion, not just untested but also politically motivated (unlike my opinions in this area, which are of course completely unblemished by bias or wishful thinking… or perhaps not).

The whole area of Sociobiology / Evolutionary Psychology rather fell out of fashion over this for a while, dating back to “social Darwinism” being used to justify inequality in the late 19th and early 20th centuries — attempts were made to tar Sociobiology with the same brush in the 1970s, and as I understand it that’s part of why the name was changed to Evolutionary Psychology in the 1980s and 1990s (along with some internal changes in emphasis). However, my point remains that even poor, somewhat questionable epistemic/scientific evidence about why human moral intuitions are what they are is a very different situation from having no epistemic evidence at all, with human moral intuitions just being a free-floating observational fact. To an engineer-of-human-societies, human moral intuitions are obviously a vitally important constraint, so understanding to what degree parts of them are genetically inherent and/or cultural becomes extremely important, as I said in post one.
On your second point, I’m less clear where you’re going with that, but I think I agree with everything you point out. Evolutionary processes are both stochastic and operating in a very complex loss-function space that is extremely contingent on specific circumstances. So they are neither general nor entirely predictable, but by shaping their circumstances you can alter the loss function and thus (in an often rather difficult-to-foresee way) shape the outcome. On the other hand, some basic facts, such as tit-for-tat being a good strategy in certain categories of iterated non-zero-sum games (see the sketch below), and the relation of that to our sense of justice, are rather predictable and likely to be rather universal across any evolved sapient species that lives in large, mostly-not-closely-related groups.
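As a minimal sketch of that basic fact, here is tit-for-tat in an iterated prisoner’s dilemma (the payoff values T=5, R=3, P=1, S=0 are the standard textbook choices, assumed here for illustration, not anything from the sequence): it sustains cooperation with a cooperator and loses only the first round to a pure defector, which is roughly the “punish, then forgive” shape of a justice intuition.

```python
# Standard (assumed) prisoner's-dilemma payoffs:
# T=5 (defect vs cooperator), R=3 (mutual cooperation),
# P=1 (mutual defection), S=0 (cooperate vs defector).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, always_defect))   # (199, 204): loses only the first round
```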
One of the first authors I read independently, rather than at the behest of school or someone else, was Robert Ardrey. He was a playwright, clustered in a small group of theorists who called themselves “ethologists”. Mostly what this meant was people who insisted animal behavior was a template for plausible evopsych theories about human beings. I don’t consider him a serious author, but I received continuous heavy-handed pushback about him from the beginning of age 17 to the end of age 17. Subsequent critiques of evopsych, of animal consciousness and moral patiency, and of any assumed inductive continuity between organic and artificial intelligence, have always read to me the same way: technically correct, but using that technical correctness as a semantic stop sign, a barrier to disincentivize investigating a cluster of real things that a single wrong theory never had absolute claim to in the first place. This may be naivety-driven, but I still insistently find it a contemptible practice in intelligent social groups: the theory is wrong, yet it retains a valid property claim on the phenomena it was intended to address, which become permanently orphaned from future conversation because the theory was wrong.
For what it’s worth, my interest in Evolutionary Psychology basically dates back to reading Richard Dawkins’ popularization The Selfish Gene as an impressionable teenager. Though I also read (and was less struck by) Konrad Lorenz and Desmond Morris, who I gather were Ethologists.
On the Ethologists, I’m not deeply familiar with them, but my basic viewpoint would be that yes, drives like aggression obviously date back to before primates were social creatures living in larger, mostly-not-closely-related troops, and comparing humans to other mammals or even birds is not pointless. But we spent at least 30 million years evolving in that specific context, during which our brains got a lot bigger and our behavior got a lot more complex. So analogies to animals that aren’t primates, and that don’t live in largish, mostly-not-closely-related groups, are generally not very informative for quite a lot of our behavior. So much of our behavior is about playing iterated non-zero-sum games with members of the same tribe who are not close relatives of ours: we are specialists in allying with non-kin, and it shows. Thus I think the later Sociobiologists and the later Evolutionary Psychologists were on firmer ground than the earlier Ethologists, so were likely less wrong. But the fact remains that many of the ideas of Evolutionary Psychology are reasonable-sounding evolutionary hypotheses with little or no neurological or genetic basis that have undergone relatively little experimental testing, and while this area of Biology is 50-odd years old, it’s still not that firmly established. It’s just the best source we currently have.
I actually expect ASI AI-Assisted Alignment / Value Learning to put quite a bit of effort into putting Evolutionary Psychology and related fields, like the genetics of neurology, on a better experimental basis — this seems like a rather important part of the Outer Alignment problem to me: to align AI to human values we need to figure out what humans actually value, and which parts of that are genetically determined versus culturally mutable, and to what extent. Which in turn leads to the problem I discussed in the previous post in this sequence, The Mutable Values Problem in Value Learning and CEV.