
Evolutionary Psychology

Last edit: 27 Dec 2023 10:01 UTC by RogerDearnaley

Evolution, the cause of the diversity of biological life on Earth, does not work like humans do, and does not design things the way a human engineer would. This blind idiot god is also the source and patterner of human beings. “Nothing in biology makes sense except in the light of evolution,” said Theodosius Dobzhansky. Human brains are also biology, and nothing about our thinking makes sense except in the light of evolution.

Consider, for example, the following tale:

A man and a woman meet in a bar. The man is attracted to her form and clear complexion, which would have been fertility cues in the ancestral environment, but which in this case result from makeup and a bra. This does not bother the man; he just likes the way she looks. His clear-complexion-detecting neural circuitry does not know that its purpose is to detect fertility, any more than the atoms in his hand contain tiny little XML tags reading “<purpose>pick things up</purpose>”. The woman is attracted to his confident smile and firm manner, cues to high status, which in the ancestral environment would have signified the ability to provide resources for children. She plans to use birth control, but her confident-smile-detectors don’t know this any more than a toaster knows its designer intended it to make toast. She’s not concerned philosophically with the meaning of this rebellion, because her brain is a creationist and denies vehemently that evolution exists. He’s not concerned philosophically with the meaning of this rebellion, because he just wants to get laid. They go to a hotel, and undress. He puts on a condom, because he doesn’t want kids, just the dopamine-noradrenaline rush of sex, which reliably produced offspring 50,000 years ago when it was an invariant feature of the ancestral environment that condoms did not exist. They have sex, and shower, and go their separate ways. The main objective consequence is to keep the bar and the hotel and condom-manufacturer in business; which was not the cognitive purpose in their minds, and has virtually nothing to do with the key statistical regularities of reproduction 50,000 years ago which explain how they got the genes that built their brains that executed all this behavior.

This only makes sense in the light of evolution as a designer—that we are poorly optimized to reproduce by a blind and unforesightful god.

The idea of evolution as the idiot designer of humans—that our brains are not consistently well-designed—is a key element of many of the explanations of human errors that appear on this website.


An Especially Elegant Evpsych Experiment

Eliezer Yudkowsky · 13 Feb 2009 14:58 UTC
71 points · 41 comments · 4 min read · LW link

Cynicism in Ev-Psych (and Econ?)

Eliezer Yudkowsky · 11 Feb 2009 15:06 UTC
35 points · 40 comments · 4 min read · LW link

I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too

chaosmage · 15 Oct 2022 12:41 UTC
79 points · 9 comments · 3 min read · LW link · 1 review

Genetic fitness is a measure of selection strength, not the selection target

Kaj_Sotala · 4 Nov 2023 19:02 UTC
55 points · 43 comments · 18 min read · LW link

The Psychological Diversity of Mankind

Kaj_Sotala · 9 May 2010 5:53 UTC
143 points · 162 comments · 7 min read · LW link

Evolutionary Psychology

Eliezer Yudkowsky · 11 Nov 2007 20:41 UTC
93 points · 42 comments · 5 min read · LW link

Problems in evolutionary psychology

Kaj_Sotala · 13 Aug 2010 18:57 UTC
85 points · 102 comments · 8 min read · LW link

The Psychological Unity of Humankind

Eliezer Yudkowsky · 24 Jun 2008 7:12 UTC
57 points · 23 comments · 4 min read · LW link

Protein Reinforcement and DNA Consequentialism

Eliezer Yudkowsky · 13 Nov 2007 1:34 UTC
60 points · 20 comments · 4 min read · LW link

Babies and Bunnies: A Caution About Evo-Psych

Alicorn · 22 Feb 2010 1:53 UTC
81 points · 843 comments · 2 min read · LW link

Rational vs. Scientific Ev-Psych

Eliezer Yudkowsky · 4 Jan 2008 7:01 UTC
34 points · 49 comments · 3 min read · LW link

Guilt: Another Gift Nobody Wants

Scott Alexander · 31 Mar 2011 0:27 UTC
99 points · 103 comments · 8 min read · LW link

The Gift We Give To Tomorrow

Eliezer Yudkowsky · 17 Jul 2008 6:07 UTC
138 points · 99 comments · 8 min read · LW link

Instrumental vs. Epistemic—A Bardic Perspective

MBlume · 25 Apr 2009 7:41 UTC
89 points · 189 comments · 3 min read · LW link

Are most personality disorders really trust disorders?

chaosmage · 6 Feb 2024 12:37 UTC
20 points · 4 comments · 1 min read · LW link

Could evolution have selected for moral realism?

John_Maxwell · 27 Sep 2012 4:25 UTC
7 points · 53 comments · 3 min read · LW link

Would Your Real Preferences Please Stand Up?

Scott Alexander · 8 Aug 2009 22:57 UTC
90 points · 132 comments · 4 min read · LW link

Alien Axiology

snerx · 20 Apr 2023 0:27 UTC
3 points · 2 comments · 5 min read · LW link

Expecting Short Inferential Distances

Eliezer Yudkowsky · 22 Oct 2007 23:42 UTC
338 points · 106 comments · 3 min read · LW link

Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems

Kaj_Sotala · 2 Oct 2020 11:50 UTC
46 points · 1 comment · 3 min read · LW link
(kajsotala.fi)

The biological function of love for non-kin is to gain the trust of people we cannot deceive

chaosmage · 7 Nov 2022 20:26 UTC
43 points · 3 comments · 8 min read · LW link

Ends Don’t Justify Means (Among Humans)

Eliezer Yudkowsky · 14 Oct 2008 21:00 UTC
190 points · 97 comments · 4 min read · LW link

Speculative Evopsych, Ep. 1

Optimization Process · 22 Nov 2018 19:00 UTC
41 points · 9 comments · 1 min read · LW link

The Moral Copernican Principle

Legionnaire · 2 May 2023 3:25 UTC
5 points · 7 comments · 2 min read · LW link

Detached Lever Fallacy

Eliezer Yudkowsky · 31 Jul 2008 18:57 UTC
80 points · 42 comments · 7 min read · LW link

Reasoning isn’t about logic (it’s about arguing)

Morendil · 14 Mar 2010 4:42 UTC
66 points · 31 comments · 3 min read · LW link

Trivers on Self-Deception

Scott Alexander · 12 Jul 2011 21:04 UTC
65 points · 25 comments · 4 min read · LW link

Why Support the Underdog?

Scott Alexander · 5 Apr 2009 0:01 UTC
42 points · 102 comments · 3 min read · LW link

Rebelling Within Nature

Eliezer Yudkowsky · 13 Jul 2008 12:32 UTC
40 points · 38 comments · 8 min read · LW link

The Evolutionary-Cognitive Boundary

Eliezer Yudkowsky · 12 Feb 2009 16:44 UTC
50 points · 29 comments · 3 min read · LW link

Sympathetic Minds

Eliezer Yudkowsky · 19 Jan 2009 9:31 UTC
59 points · 27 comments · 5 min read · LW link

Minds: An Introduction

Rob Bensinger · 11 Mar 2015 19:00 UTC
47 points · 2 comments · 6 min read · LW link

Shittests are actually good

snog toddgrass · 24 Sep 2020 17:20 UTC
−11 points · 23 comments · 2 min read · LW link

A study on depression

vlad.proex · 13 Oct 2020 15:43 UTC
21 points · 1 comment · 9 min read · LW link

Fading Novelty

lifelonglearner · 25 Jul 2018 21:36 UTC
21 points · 2 comments · 6 min read · LW link

Accelerate without humanity: Summary of Nick Land’s philosophy

Yuxi_Liu · 16 Jun 2019 3:22 UTC
31 points · 24 comments · 12 min read · LW link

Ethical Inhibitions

Eliezer Yudkowsky · 19 Oct 2008 20:44 UTC
31 points · 63 comments · 5 min read · LW link

The Wire versus Evolutionary Psychology

MrShaggy · 25 May 2009 5:21 UTC
18 points · 19 comments · 1 min read · LW link

Will value of paid sex drop right before the end of the world?

azamatvaliev · 2 Sep 2023 19:03 UTC
−13 points · 0 comments · 4 min read · LW link

A Theory of Laughter

Steven Byrnes · 23 Aug 2023 15:05 UTC
101 points · 13 comments · 22 min read · LW link

Thou Art Godshatter

Eliezer Yudkowsky · 13 Nov 2007 19:38 UTC
218 points · 81 comments · 5 min read · LW link

A Failed Just-So Story

Eliezer Yudkowsky · 5 Jan 2008 6:35 UTC
18 points · 49 comments · 2 min read · LW link

Compromising with Compulsion

matejsuchy · 25 Feb 2021 16:43 UTC
4 points · 1 comment · 8 min read · LW link

Machines vs Memes Part 1: AI Alignment and Memetics

Harriet Farlow · 31 May 2022 22:03 UTC
18 points · 1 comment · 6 min read · LW link

My take on Jacob Cannell’s take on AGI safety

Steven Byrnes · 28 Nov 2022 14:01 UTC
71 points · 15 comments · 30 min read · LW link · 1 review

Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning

Roger Dearnaley · 21 Feb 2023 9:05 UTC
10 points · 1 comment · 23 min read · LW link

Is That Your True Rejection? by Eliezer Yudkowsky @ Cato Unbound

XiXiDu · 7 Sep 2011 18:27 UTC
44 points · 83 comments · 1 min read · LW link

5. Moral Value for Sentient Animals? Alas, Not Yet

RogerDearnaley · 27 Dec 2023 6:42 UTC
35 points · 41 comments · 23 min read · LW link

Motivating Alignment of LLM-Powered Agents: Easy for AGI, Hard for ASI?

RogerDearnaley · 11 Jan 2024 12:56 UTC
22 points · 4 comments · 39 min read · LW link

Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor

RogerDearnaley · 9 Jan 2024 20:42 UTC
46 points · 8 comments · 36 min read · LW link

Reputational Warfare in the 21st Century

Declan Molony · 16 Jan 2024 6:57 UTC
−2 points · 5 comments · 3 min read · LW link

7. Evolution and Ethics

RogerDearnaley · 15 Feb 2024 23:38 UTC
2 points · 6 comments · 6 min read · LW link

Alignment has a Basin of Attraction: Beyond the Orthogonality Thesis

RogerDearnaley · 1 Feb 2024 21:15 UTC
4 points · 15 comments · 13 min read · LW link

Requirements for a Basin of Attraction to Alignment

RogerDearnaley · 14 Feb 2024 7:10 UTC
20 points · 6 comments · 31 min read · LW link

“Arctic Instincts? The universal principles of Arctic psychological adaptation and the origins of East Asian psychology”—Call for Reviewers (Seeds of Science)

rogersbacon · 16 Feb 2024 15:02 UTC
0 points · 0 comments · 2 min read · LW link

6. The Mutable Values Problem in Value Learning and CEV

RogerDearnaley · 4 Dec 2023 18:31 UTC
12 points · 0 comments · 49 min read · LW link

Clarifying the free energy principle (with quotes)

Ryo · 29 Oct 2023 16:03 UTC
8 points · 0 comments · 9 min read · LW link

Alignment—Path to AI as ally, not slave nor foe

ozb · 30 Mar 2023 14:54 UTC
10 points · 3 comments · 2 min read · LW link

How AGI will actually end us: Some predictions on evolution by artificial selection

James Carney · 10 Apr 2023 13:52 UTC
−11 points · 1 comment · 13 min read · LW link

Book Review: Orality and Literacy: The Technologizing of the Word

Fergus Fettes · 28 Oct 2023 20:12 UTC
13 points · 0 comments · 16 min read · LW link

GPT-2 XL’s capacity for coherence and ontology clustering

MiguelDev · 30 Oct 2023 9:24 UTC
6 points · 2 comments · 41 min read · LW link

My Dating Plan ala Geoffrey Miller

snog toddgrass · 17 Jul 2020 4:52 UTC
2 points · 57 comments · 3 min read · LW link

Hedonic asymmetries

paulfchristiano · 26 Jan 2020 2:10 UTC
98 points · 22 comments · 2 min read · LW link
(sideways-view.com)