The Psychological Unity of Humankind
Followup to: Evolutions Are Stupid (But Work Anyway), Evolutionary Psychology
Biological organisms in general, and human brains particularly, contain complex adaptations; adaptations which involve many genes working in concert. Complex adaptations must evolve incrementally, gene by gene. If gene B depends on gene A to produce its effect, then gene A has to become nearly universal in the gene pool before there’s a substantial selection pressure in favor of gene B.
A fur coat isn’t an evolutionary advantage unless the environment reliably throws cold weather at you. And other genes are also part of the environment; they are the genetic environment. If gene B depends on gene A, then gene B isn’t a significant advantage unless gene A is reliably part of the genetic environment.
Let’s say that you have a complex adaptation with six interdependent parts, and that each of the six genes is independently at ten percent frequency in the population. The chance of assembling a whole working adaptation is literally a million to one; and the average fitness of the genes is tiny, and they will not increase in frequency.
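To put numbers on that (a toy calculation using only the hypothetical figures above):

```python
# Toy arithmetic for the six-gene example: each of six interdependent
# genes sits independently at 10% frequency in the population.

freq = 0.10      # frequency of each gene (hypothetical)
n_parts = 6      # number of interdependent parts (hypothetical)

p_complete = freq ** n_parts
print(f"P(random genome carries all {n_parts} parts) = {p_complete:.0e}")  # 1e-06

# Any single part only pays off when the other five happen to be present,
# so its expected benefit is diluted by freq**(n_parts - 1) = 1e-05 --
# far too little selection pressure to push it toward fixation.
print(f"Benefit dilution per part: {freq ** (n_parts - 1):.0e}")
```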
In a sexually reproducing species, complex adaptations are necessarily universal.
One bird may have slightly smoother feathers than another, but they will both have wings. A single mutation can be possessed by some lucky members of a species, and not by others—but single mutations don’t correspond to the sort of complex, powerful machinery that underlies the potency of biology. By the time an adaptation gets to be really sophisticated with dozens of genes supporting its highly refined activity, every member of the species has some version of it—barring single mutations that knock out the whole complex.
So you can’t have the X-Men. You can’t have “mutants” running around with highly developed machinery that most of the human species doesn’t have. And no, extra-powerful radiation does not produce extra-potent mutations, that’s not how it works.
Again by the nature of sexual recombination, you’re very unlikely to see two complexly different adaptations competing in the gene pool. Two individual alleles may compete. But if you somehow had two different complex adaptations built out of many non-universal alleles, they would usually assemble in scrambled form.
So you can’t have New Humans and Old Humans either, contrary to certain science fiction books that I always found rather disturbing.
This is likewise the core truth of biology that justifies my claim that Einstein must have had very nearly the same brain design as a village idiot (presuming the village idiot does not have any actual knockouts). There is simply no room in reality for Einstein to be a Homo novis.
Maybe Einstein got really lucky and had a dozen not-too-uncommon kinds of smoother feathers on his wings, and they happened to work well together. And then only half the parts, on average, got passed on to each of his kids. So it goes.
“Natural selection, while feeding on variation, uses it up,” the saying goes. Natural selection takes place when you’ve got different alleles in the gene pool competing, but in a few hundred generations one allele wins, and you don’t have competition at that allele any more, unless a new mutation happens to come along.
And if new genes come along that depend on the now-universal gene, that will tend to lock it in place. If A rises to universality, and then B, C, and D come along that depend on A, any A’ mutation that would be an improvement on A in isolation, may break B, C, or D and lose the benefit of those genes. Genes on which other genes depend, tend to get frozen in place. Some human developmental genes, that control the action of many other genes during embryonic development, have identifiable analogues in fruit flies.
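A toy Wright-Fisher simulation makes the “feeding on variation, using it up” point concrete; the population size and selection coefficient below are invented purely for illustration:

```python
import random

# Haploid Wright-Fisher model with selection: a single advantageous
# allele starts rare, sweeps through the population, and fixes --
# after which there is no variation left at this locus to select on.
# All parameters are made up for illustration.

N = 1000     # population size
s = 0.05     # selective advantage of allele B over allele b
p = 0.10     # starting frequency of B

random.seed(0)
generations = 0
while 0.0 < p < 1.0:
    # Selection: weight B's reproductive share by (1 + s).
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
    # Drift: binomial sampling of the next generation of N individuals.
    p = sum(random.random() < p_sel for _ in range(N)) / N
    generations += 1

outcome = "fixed" if p == 1.0 else "was lost"
print(f"Allele {outcome} after {generations} generations")
# With these numbers the allele typically fixes within a few hundred
# generations, matching the timescale described above.
```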
You might think of natural selection at any given time, as a thin froth of variation frantically churning above a deep, still pool of universality.
And all this which I have said, is also true of the complex adaptations making up the human brain.
This gives rise to a rule in evolutionary psychology called “the psychological unity of humankind”.
Donald E. Brown’s list of human universals is a list of psychological properties which are found so commonly that anthropologists don’t report them. If a newly discovered tribe turns out to have a sense of humor, tell stories, perform marriage rituals, make promises, keep secrets, and become sexually jealous… well, it doesn’t really seem worth reporting any more. You might record the specific tales they tell. But that they tell stories doesn’t seem any more surprising than their breathing oxygen.
In every known culture, humans seem to experience joy, sadness, fear, disgust, anger, and surprise. In every known culture, these emotions are indicated by the same facial expressions.
This may seem too natural to be worth mentioning, but try to take a step back and see it as a startling confirmation of evolutionary biology. You’ve got complex neural wiring that controls the facial muscles, and even more complex neural wiring that implements the emotions themselves. The facial expressions, at least, would seem to be somewhat arbitrary—not forced to be what they are by any obvious selection pressure. But no known human tribe has been reproductively isolated long enough to stop smiling.
When something is universal enough in our everyday lives, we take it for granted; we assume it without thought, without deliberation. We don’t ask whether it will be there—we just act as if it will be. When you enter a new room, do you check it for oxygen? When you meet another intelligent mind, do you ask whether it might not have an emotion of joy?
Let’s go back to biology for a moment. What if, somehow, you had two different adaptations which both only assembled in the presence, or alternatively the absence, of some particular developmental gene? Then the question becomes: Why would the developmental gene itself persist in a polymorphic state? Why wouldn’t the better adaptation win—rather than both adaptations persisting long enough to become complex?
So a species can have different males and females, but that’s only because neither the males nor the females ever “win” and drive the alternative to extinction.
This creates the single allowed exception to the general rule about the psychological unity of humankind: you can postulate different emotional makeups for men and women in cases where there exist opposed selection pressures for the two sexes. Note, however, that in the absence of actually opposed selection pressures, the species as a whole will get dragged along even by selection pressure on a single sex. This is why males have nipples; it’s not a selective disadvantage.
I believe it was Larry Niven who suggested that the chief experience human beings have with alien intelligence is their encounters with the opposite sex.
This doesn’t seem to be nearly enough experience, judging by Hollywood scriptwriters who depict AIs that are ordinarily cool and collected and repressed, until they are put under sufficient stress that they get angry and show the corresponding standard facial expression.
No, the only really alien intelligence on this planet is natural selection, of which I have already spoken… for exactly this reason, that it gives you true experience of the Alien. Evolution knows no joy and no anger, and it has no facial expressions; yet it is nonetheless capable of creating complex machinery and complex strategies. It does not work like you do.
If you want a real alien to gawk at, look at the other Powerful Optimization Process.
This vision of the alien conveys how alike humans truly are—what it means that everyone has a prefrontal cortex, everyone has a cerebellum, everyone has an amygdala, everyone has neurons that run at O(20Hz), everyone plans using abstractions.
Having been born of sexuality, we must all be very nearly clones.
Given that we’re heading towards morality and the Singularity, I imagine this is here as a dependency for a future post arguing that there is a more-or-less objective notion of morality/humaneness/Friendliness for humans. However, even though all humans are virtually alike compared to the vastness of possible mind design space, there is still, from a human perspective, an enormous amount of variation amongst people. The things that everyone has in common are ipso facto irrelevant: it would border on contradiction to tell a human that her moral stance is wrong because it is contrary to human nature, for if it really were, a human couldn’t hold it.
Something to address in a future post, perhaps.
Is this true of ideas/memes also? Because there are a lot of wacky groups out there, more than just froth on the surface.
This brings up the Sapir-Whorf hypothesis, or the newspeak for it, “Linguistic Relativity”. After all, memes must be expressible, mustn’t they? If they are, and if the hypothesis were true, then the memes that you have bound the memes that you can espouse—linguistic relativity in a nutshell.
Many memes these days come in picture form, but for that you need a medium capable of showing pictures, and a culture that places value on making such media universally available. Without that culture, and without the apparatus to share picture-memes, those memes would quickly die out, though some abstract notion of some of them might survive along the linguistic pathway, the way the floppy disk survives as the icon for “save” even though nobody uses floppy disks anymore. So in a sense the medium and its memes have to be prior to the memes expressed via it, and the language memes have to be there in order for them to be used. Language might as well just be thought of as the structure and set of meme universals.
Looks like there has been activity on Wikipedia since I last dug up this issue, suggesting that at least since the 1980s there has been research on how language and memes influence thought and the future use of memes and language. Reddit in particular has some really good data on this which, last I heard, they were not sharing with the world.
The big question is: if memes are different, as the evidence suggests, why is this so?
Z. M. Davis, that’s not what this is a dependency for.
Ian, this principle does not hold for human artifacts and human memes; we can make large jumps through design space with many coordinated simultaneous changes.
Ah yes, you covered that in Optimization and the Singularity.
Eliezer,
Hm. There’s a lot to this… but let’s just say that when it comes to psychology, the residual of non-universality seems really important to focus on in the area of individual differences. Specifically, it seems entirely possible that frequency-dependent personality morphs abound; e.g., all the stuff about DRD4 that I’ve been posting on my weblog. Of course, this is single-locus, but I think it’s just the locus of biggest effect. I suspect that what evolutionary psychology misses is that after you account for the substrate of human universals you’ve got personality morphs which are playing “games” with each other, and in particular there are a host of low-frequency morphs running around. These morphs are “complex” to my way of thinking, though perhaps not complex in the way you’re implying (I know the evo psych argument against a lot of variation on traits). I happen to think epistasis might also be pretty significant in the transient.
Second, I’m also getting curious about variation in traits we perceive as on-off, where those who are “off” are purely pathological. E.g., it turns out that 2% of the population might be “face blind” but have been cryptic because they develop techniques to mask this problem and don’t talk about it. I don’t think it is just 2% vs. 98%; I think there are a few other steps in between, though there’s a skewness toward the “normal” facial-recognition-ability side. On the basis of this I’m willing to dig a little deeper into “human universals” to see what might crop up on the margins.
You can have real X-Men; check out a Discovery special about “real superhumans”. There was one guy who could withstand cold so well that the doctors thought it shouldn’t be possible. A single mutation sometimes does create significant changes (and in this case advantages).
http://www.discoveryhd.ca/shows/castdetails.aspx?cid=4619&sid=4608
razib, great post. Overall, if there’s a particular weakness among enthusiasts (and “experts”) in rational thinking, it’s a tendency toward overreductionism. Way too often the discussion here goes from “most X are Y” to “let’s discuss why X is Y”. It can be revealing to analyze the subset of X that aren’t Y, as well as the overall distribution of X according to how much they are or aren’t Y. But these other analyses aren’t done or discussed much here at OB, or at most similar sites.
In general, I definitely agree with this post. Though I put little trust in Donald E. Brown, my concerns aren’t relevant here. Slightly relevant is the fact that it seems conceivable that the complex functional adaptations of humankind are so numerous and so complex that we exist at an equilibrium where the more critical adaptations appear in almost everyone, but there are also significant adaptations which are less critical. Genes necessary to those adaptations might break regularly and not be weeded out by selection quickly. If such adaptations are numerous, most people might be missing several. Other adaptations might, like vision, require environmental conditions in order to emerge: conditions that were reliably present in the ancestral environment but may not be so reliably present today.
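A rough sketch of this scenario, using the textbook mutation-selection-balance approximation (a framing the commenter doesn’t use; every number below is invented):

```python
# Mutation-selection balance sketch: if a "less critical" adaptation can
# be broken by mutation at rate u per generation, and a broken copy costs
# only a small fitness penalty s, the broken variant persists at an
# equilibrium frequency of roughly u / s (for a dominant defect).
# All values are invented for illustration.

u = 1e-5              # per-adaptation breakage rate per generation
s = 1e-3              # fitness cost of a broken copy (weak selection)
n_adaptations = 500   # hypothetical number of such adaptations

q = u / s                            # equilibrium frequency of broken variant
expected_broken = n_adaptations * q  # expected breakages per person

print(f"Equilibrium frequency of each broken variant: {q:.1%}")
print(f"Expected broken adaptations per person: {expected_broken:.1f}")
# With these made-up numbers, the average person is missing about five --
# in the spirit of "most people might be missing several".
```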
Lessons:
1) A situation with AIs whose intelligence is between village idiot and Einstein—assuming there is a scale to make “between” a non-poetic concept—is not very likely and probably short-lived if it does occur (unless perhaps it is engineered that way on purpose).
2) Aspects of human cognition—our particular emotions, our language forms, perhaps even pervasive mental tricks like reasoning by analogy—may be irrelevant to Optimization Processes in general, making their focus for AI research possibly “voodoo doll” methodology. AI may only deal with such things as part of communicating with humans, though mastering them well enough to participate effectively in human culture may be as difficult as inventing new technologies.
3) Optimization Processes built by Intelligent Designers can develop in ways that those built by evolution cannot because of multiple coordinated changes (this point has been beaten to death by now I think).
4) Sex is interesting.
For once, I have no complaints. I assume the path is being cleared for a discussion of what actually IS required for an optimization process to do what we need it to do (model the world, improve itself, etc), which seems only marginally related to what our brains do. If that’s where this is headed, I’m looking forward to it.
I agree with the basic points about humans. But if we agree that intelligence is basically a guided search algorithm through design-space, then the interesting part is what guides the algorithm. And it seems like at least some of our emotions are an intrinsic part of this process, e.g. perception of beauty, laziness, patience or lack thereof, etc. In fact, I think that many of the biases discussed on this site are not really bugs but features that ordinarily work so well for the task that we don’t notice them unless they give the wrong result (just like optical illusions). In short, I think any guided optimization process will resemble human intelligence in some ways (don’t know which ones), for reasons that I explained in my response to the last post.
Which actually makes me think of something interesting: possibly, there is no optimal guided search strategy. The reason why humans appear to succeed at it is that there are many of us thinking about the same thing at any given time, and each of us has a slightly differently tuned algorithm. So one of us is likely to end up converging on the solution, even though nobody has an algorithm that can find every solution. And people self-select for the types of problems that they’re good at.
The theme of this post isn’t very accurate, as large phenotypic polymorphisms in various other species demonstrate—e.g.:
Where did the reasoning go off the rails?
It isn’t just the case of males and females where different alleles can form a truce. There’s the whole phenomenon of frequency-dependent selection. Most people are familiar with this from blood types and sickle-cell anaemia. Alleles with phenotypic effects involving disease resistance can be advantageous when rare and disadvantageous when common—resulting in them never going near extinction or fixation.
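A toy deterministic model of negative frequency-dependent selection shows this stalemate; the parameters are invented for illustration:

```python
# Negative frequency-dependent selection: an allele that is advantageous
# when rare and disadvantageous when common settles at an intermediate
# frequency instead of going to fixation or extinction.
# All parameters are invented for illustration.

p = 0.01        # starting allele frequency
s_max = 0.10    # advantage when the allele is vanishingly rare
p_star = 0.30   # frequency at which the advantage flips sign

for _ in range(2000):
    s = s_max * (1 - p / p_star)                   # edge shrinks as p rises
    p = p * (1 + s) / (p * (1 + s) + (1 - p))      # deterministic update

print(f"Frequency after 2000 generations: {p:.3f}")  # settles near p_star = 0.30
```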
Also, this premise is inaccurate:
Gene B can spread if gene A is present at a frequency of 20% in the population—provided it is not deleterious in the absence of gene A. Sure, then the selection pressure maintaining it is reduced by a factor of five, but that’s not necessarily enough to kill it off.
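The same point in numbers (all values hypothetical; gene B is assumed neutral when gene A is absent):

```python
# Gene B gives advantage s_b only when gene A is present, and is assumed
# neutral otherwise. With A at 20% frequency, B's average advantage is
# diluted fivefold -- weaker, but still positive, so B can still spread.
# Values are hypothetical.

freq_A = 0.20    # frequency of gene A
s_b = 0.05       # B's advantage when A is present

effective_s = freq_A * s_b
print(f"Effective advantage of B: {effective_s:.3f} "
      f"(a factor of {s_b / effective_s:.0f} smaller)")
```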
Finally, phenotypic variation does not necessarily depend on genetic variation. There’s also the influence of the environment to consider. In general, the environment is quite capable of sending some organisms down different developmental paths depending on the circumstances in which they find themselves. This is known as phenotypic plasticity.
There are plenty of examples of phenotypic plasticity in humans—e.g. the effect is an important part of the reason why a Sumo wrestler and a racing jockey have different phenotypes.
That sounds like exactly the kind of situation Eliezer claims as the exception—the adaptation is present in the entire population, but only expressed in a subset based on the environmental conditions during development, because there’s a specific advantage to polymorphism.
Those are single genes, not complex adaptations consisting of multiple mutually dependent genes. Exactly the “froth” he describes.
Upvote: polymorphism doesn’t indicate an absence of complex genes in part of a species. Consider that a uterus is a complex adaptation, and that my male body does contain a set of genes for building a uterus. The genes may be switched off or repurposed in my body, but they still exist, and are presumably reactivated in my daughter (in combination with some genes from my wife).
Not sure why Tyler speaks as if a pseudo-three-sexed species offers new and different evidence we don’t get from our two-sexed species.
P.S. Don’t females lack the Y chromosome, though? My impression is that this is related to the degradation of that chromosome, which makes it less important over the eons, so that maybe someday (if nature were to take its course) its only purpose will be to act as a signal of maleness that affects gene expression on other chromosomes.
All the “third sexes” I can think of are like this: males in female form, for a direct reproductive advantage.
Not a big departure from two sexes.
Eusocial insects might be more interesting.
I argue that the idea that complex adaptations are necessarily universal is fundamentally not correct—in my “Species Unity” essay:
http://alife.co.uk/essays/species_unity/
A mixed-strategy equilibrium?
The supposed psychological unity of humankind seems to be part of the “CEV” idea—since it suggests the intersection of human desires might be large.
However, if humans frequently want conflicting things—due to them all wanting to selfishly promote their own ends—the intersection of all human desires seems as though it would be smaller and of less interest.
A funny example of how two different complex adaptations can stably coexist within the same sex is Sepia apama, although this does involve cross-dressing. (I learned this from the BBC Life TV series, which is just amazing. I strongly recommend it to everybody.)
I’ve got to disagree with this one. Let’s take a concrete example, say pity. The ability to feel pity is a complex adaptation, and so all persons feel pity. However, HOW MUCH any one person feels pity for others is a highly variable quantity. It varies dramatically from person to person, and from situation to situation; moreover, some people (e.g., psychopaths) don’t feel any pity at all: their pity mechanism is broken, defective. Therefore the only conclusion you can draw from the alleged “psychological unity of humankind” is that a person will feel some unknown amount of pity in a given situation, unless, of course, they feel none at all.
The possible scale for pity ranges from 0 to some (unknown) maximum. Alleging the “psychological unity of humankind” gives you no additional information.
You can make the same null statement about every other human psychological trait, from conscientiousness to competitiveness.
You can’t even state “‘Smorgraph’ does not exist, since I have never felt it,” because, of course, your smorgraph generator or detector may be defective.
Lastly: until a couple of hundred years ago, human beings were in relatively isolated breeding pools, with fairly limited transfer of genetic material between pools. This is still the case, although to a lesser degree (compare the amount of gene transfer between Lima, Peru and Ontario, Canada with the amount of gene transfer within the city of Toronto). It is highly likely that differential evolutionary pressure drove evolution in different directions in different sub-populations. The most famous examples of these are physical adaptations, like sickle-cell anemia (an adaptation against malaria), the ability to digest milk sugar, etc., but the same kind of evolutionary pressure no doubt also drove expression levels of psychological adaptations. Psychological mechanisms are obviously heritable and shaped by evolution; i.e., in some ancestral environments it was no doubt MORE ADVANTAGEOUS to be highly loyal, or to be very calm and dispassionate, or to be extremely vengeful and quick to anger, or to be deeply concerned about the well-being of your children, or to be very lustful, or extremely conscientious. These are adaptations, and your evolutionary environment is going to slide them up or down, based on differential survival rates.
Thus, it is HIGHLY PROBABLE that people in one ancestral sub-population are far more like EACH OTHER psychologically than they are like other human beings. The examples of this, which are legion, are commonly called ‘prejudice’. Prior probability would probably be a more scientific term.
‘The psychological unity of mankind’ is nothing more than a fairy story, something good-hearted (or weak-headed) people WANT to believe, because they are afraid that people ‘being different’ will lead to pogroms and lynching and blind discrimination. Well, probably it does. But it’s still TRUE.
I’ve looked for that link before, and couldn’t find it. It’s closely related to Moral Foundations Theory, which is basically 6 categories for features of morality which are found in every culture.
Pretty sobering list, thanks.
My thought for a minor caveat: there may be one or two complex mental adaptations that are not universal.
My idea here is that something changed to trigger the transition from agrarian society to industrial society. For example, maybe there’s a new brain innovation related to science, engineering, language, math, logical thinking, or epistemology.
Or not. This change could also have been driven simply by cultural memes of science and invention, after the final piece of complexity was already fixed in the population, so that if a group of children from 10,000 years ago were transplanted into today’s world, they would be just as inclined to become scientists, engineers or philosophers as modern children. On the other hand, it could also be that these children would be disproportionately inclined to become engineers rather than scientists, or vice versa. Or maybe the ancient children would have fewer independent thinkers. Or would have worse language or math talent.
I see no way of testing this hypothesis though.