One of the first authors I read independently, rather than at the behest of school or someone else, was Robert Ardrey. He was a playwright, clustered in a small group of theorists who called themselves “ethologists”. Mostly what this meant was: people who insisted animal behavior was a template for plausible evopsych theories about human beings. I don’t consider him a serious author, but I received continuous heavy-handed pushback about him from the beginning of age 17 to the end of age 17. Subsequent critiques of evopsych, of animal consciousness and moral patiency, and of any assumed inductive continuity between organic and artificial intelligence, have always read to me the same way: technically correct, but using that technical correctness as a semantic stop sign, a barrier to disincentivize investigating a cluster of real things that a single wrong theory never had absolute claim to in the first place. This is, maybe naivety-driven, but still insistently, a practice I find contemptible in intelligent social groups: the theory is wrong, but it retains a valid property claim to the phenomena it was intended to address, which become permanently orphaned from future conversation because the theory was wrong.
For what it’s worth, my interest in Evolutionary Psychology basically dates back to reading Richard Dawkins’ popularization The Selfish Gene as an impressionable teenager. Though I also read (and was less struck by) Konrad Lorenz and Desmond Morris, who I gather are Ethologists.
On the Ethologists, I’m not deeply familiar with them, but my basic viewpoint would be that yes, drives like aggression do obviously date back to before primates were social creatures living in larger, mostly-not-closely-related troupes, and comparing humans to other mammals or even birds is not pointless. But we spent at least 30 million years evolving in that specific context, during which our brains got a lot bigger and our behavior got a lot more complex. So analogies to animals that aren’t primates, and that don’t live in largish, mostly-not-related groups, are generally not very informative for quite a lot of our behavior. So much of our behavior is about playing iterated non-zero-sum games with members of the same tribe who are not close relatives of ours: we are specialists in allying with non-kin, and it shows. Thus I think the later Sociobiologists and the later Evolutionary Psychologists were on firmer ground than the earlier Ethologists, so were likely less wrong. But the fact remains that many of the ideas of Evolutionary Psychology are reasonable-sounding evolutionary hypotheses with little or no neurological or genetic basis that have undergone relatively little experimental testing, and while this area of Biology is 50-odd years old, it’s still not that firmly established. It’s just the best source we currently have.
I actually expect ASI AI-Assisted Alignment / Value Learning to put quite a bit of effort into trying to put Evolutionary Psychology, and related fields like the genetics of neurology, on a better experimental basis — this seems like a rather important part of the Outer Alignment problem to me: to align AI to human values we need to figure out what humans actually value, and which parts of that are to what extent genetically determined vs. culturally mutable. Which in turn leads to the problem I discussed in the previous post in this sequence, The Mutable Values Problem in Value Learning and CEV.