For what it’s worth, my interest in Evolutionary Psychology basically dates back to reading Richard Dawkins’ popularization The Selfish Gene as an impressionable teenager. Though I also read (and was less struck by) Konrad Lorenz and Desmond Morris, who I gather are Ethologists.
On the Ethologists, I’m not deeply familiar with them, but my basic viewpoint would be that yes, drives like aggression do obviously date back to before primates were social creatures living in larger, mostly-not-closely-related troops, and comparing humans to other mammals or even birds is not pointless. But we spent at least 30 million years evolving in that specific context, during which our brains got a lot bigger and our behavior got a lot more complex. So analogies to animals that aren’t primates, and that don’t live in largish, mostly-unrelated groups, are generally not very informative for quite a lot of our behavior. So much of our behavior is about playing iterated non-zero-sum games with members of the same tribe who are not close relatives of ours: we are specialists in allying with non-kin, and it shows. Thus I think the later Sociobiologists and the still-later Evolutionary Psychologists were on firmer ground than the earlier Ethologists, and so were likely less wrong. But the fact remains that many of the ideas of Evolutionary Psychology are reasonable-sounding evolutionary hypotheses with little or no neurological or genetic basis that have undergone relatively little experimental testing, and while this area of Biology is 50-odd years old, it’s still not that firmly established. It’s just the best source we currently have.
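To make the iterated non-zero-sum games point concrete, here is a minimal sketch (my own illustration, not something from the literature cited above) using the standard iterated Prisoner’s Dilemma: in a one-shot game defection dominates, but when the same two non-kin individuals interact repeatedly, a reciprocating strategy like tit-for-tat does far better against itself than mutual defection does.

```python
# A toy iterated Prisoner's Dilemma. In a single round, defecting ("D")
# dominates cooperating ("C"), but over repeated rounds with the same
# partner, reciprocators can outscore unconditional defectors.

PAYOFFS = {  # (my move, partner's move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(partner_moves):
    """Cooperate on the first round, then copy the partner's last move."""
    return "C" if not partner_moves else partner_moves[-1]

def always_defect(partner_moves):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run two strategies against each other; return their total scores."""
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print("TFT  vs TFT: ", play(tit_for_tat, tit_for_tat))      # (300, 300)
print("TFT  vs AllD:", play(tit_for_tat, always_defect))    # (99, 104)
print("AllD vs AllD:", play(always_defect, always_defect))  # (100, 100)
```

Mutual tit-for-tat scores 300 points apiece while mutual defection manages only 100 apiece: the textbook reason repeated interaction with remembered partners can favor cooperation even among non-relatives.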
I actually expect ASI AI-Assisted Alignment / Value Learning to put quite a bit of effort into trying to put Evolutionary Psychology and related fields like the genetics of neurology on a better experimental basis; this seems like a rather important part of the Outer Alignment problem to me: to align AI to human values, we need to figure out what humans actually value, and to what extent the various parts of that are genetically determined vs. culturally mutable. Which in turn leads to the problem I discussed in the previous post in this sequence, The Mutable Values Problem in Value Learning and CEV.