Last edit: 2 Oct 2020 18:31 UTC by Ruby

Evolution is “change in the heritable characteristics of biological populations over successive generations” (Wikipedia). For posts about machine learning, look here.

Related: Biology, Evolutionary Psychology

The sequence The Simple Math of Evolution provides a good introduction to LessWrong thinking about evolution.

Why be interested in evolution?

Firstly, evolution is a useful case study of humans’ ability (or inability) to model the real world, because it selects (optimizes) for a single clear criterion, relative reproductive fitness:

“If we can’t see clearly the result of a single monotone optimization criterion—if we can’t even train ourselves to hear a single pure note—then how will we listen to an orchestra? How will we see that “Always be selfish” or “Always obey the government” are poor guiding principles for human beings to adopt—if we think that even optimizing genes for inclusive fitness will yield organisms which sacrifice reproductive opportunities in the name of social resource conservation?

To train ourselves to see clearly, we need simple practice cases.”—Eliezer Yudkowsky, Fake Optimization Criteria
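The "single monotone optimization criterion" can be made concrete with a toy simulation (an illustrative sketch, not from the quoted post): a population of bit-string genomes is repeatedly mutated, and individuals reproduce based purely on one fitness number.

```python
import random

random.seed(0)  # deterministic for reproducibility

def fitness(genome):
    # The single monotone criterion: count of 1-bits
    # (standing in for "relative reproductive fitness").
    return sum(genome)

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.02):
    # Start from a random population of bit-string genomes.
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: only the fitter half of the population reproduces.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction with mutation: each child copies a random parent,
        # flipping each bit with small probability.
        population = [
            [bit ^ (random.random() < mutation_rate) for bit in random.choice(parents)]
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward genome_len: the one criterion is all that is optimized
```

Nothing in the loop "wants" anything; fitness is simply the only quantity that selection acts on, which is what makes it a clean practice case for seeing an optimization criterion at work.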

Secondly, much of rationality necessarily revolves around the human brain (for now). An understanding of how the brain came into being can be very helpful both for understanding ‘bugs’ in the system (like superstimuli) and for explaining the Complexity of Value, among other things.

A candy bar is a superstimulus: it contains more concentrated sugar, salt, and fat than anything that exists in the ancestral environment. A candy bar matches taste buds that evolved in a hunter-gatherer environment, but it matches those taste buds much more strongly than anything that actually existed in the hunter-gatherer environment. The signal that once reliably correlated to healthy food has been hijacked, blotted out with a point in tastespace that wasn’t in the training dataset—an impossibly distant outlier on the old ancestral graphs.
—Eliezer Yudkowsky, Superstimuli and the Collapse of Western Civilization


Posts on Evolution

The following are posts concerning evolution, from Eliezer’s sequences and elsewhere on LessWrong:

Adaptation-Executers, not Fitness-Maximizers (Eliezer Yudkowsky, 11 Nov 2007; 127 points, 33 comments, 3 min read)
An Alien God (Eliezer Yudkowsky, 2 Nov 2007; 172 points, 165 comments, 8 min read)
Problems in evolutionary psychology (Kaj_Sotala, 13 Aug 2010; 85 points, 102 comments, 8 min read)
The Tragedy of Group Selectionism (Eliezer Yudkowsky, 7 Nov 2007; 98 points, 89 comments, 5 min read)
Evolving to Extinction (Eliezer Yudkowsky, 16 Nov 2007; 109 points, 44 comments, 6 min read)
Evolution of Modularity (johnswentworth, 14 Nov 2019; 161 points, 12 comments, 2 min read)
Conjuring An Evolution To Serve You (Eliezer Yudkowsky, 19 Nov 2007; 69 points, 27 comments, 4 min read)
Studies On Slack (Scott Alexander, 13 May 2020; 145 points, 34 comments, 24 min read)
The innocent gene (Joe Carlsmith, 5 Apr 2021; 38 points, 3 comments, 9 min read)
Analogies and General Priors on Intelligence (20 Aug 2021; 57 points, 12 comments, 14 min read)
A common misconception about the evolution of viruses (mukashi, 6 Jan 2022; 7 points, 15 comments, 1 min read)
Evolutions Are Stupid (But Work Anyway) (Eliezer Yudkowsky, 3 Nov 2007; 85 points, 68 comments, 4 min read)
Evolutionary Psychology (Eliezer Yudkowsky, 11 Nov 2007; 88 points, 40 comments, 5 min read)
Beware of Stephen J. Gould (Eliezer Yudkowsky, 6 Nov 2007; 53 points, 80 comments, 6 min read)
Protein Reinforcement and DNA Consequentialism (Eliezer Yudkowsky, 13 Nov 2007; 57 points, 20 comments, 4 min read)
There’s no such thing as a tree (phylogenetically) (eukaryote, 3 May 2021; 319 points, 52 comments, 7 min read)
Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection (Oliver Sourbut, 9 May 2022; 56 points, 12 comments, 10 min read)
Superstimuli and the Collapse of Western Civilization (Eliezer Yudkowsky, 16 Mar 2007; 116 points, 89 comments, 4 min read)
Coordination Problems in Evolution: Eigen’s Paradox (Martin Sustrik, 12 Oct 2018; 102 points, 6 comments, 8 min read)
Computer bugs and evolution (PhilGoetz, 26 Oct 2009; 55 points, 10 comments, 1 min read)
No Evolutions for Corporations or Nanodevices (Eliezer Yudkowsky, 17 Nov 2007; 73 points, 32 comments, 6 min read)
Coordination Problems in Evolution: The Rise of Eukaryotes (Martin Sustrik, 15 Oct 2018; 46 points, 8 comments, 8 min read)
The Wonder of Evolution (Eliezer Yudkowsky, 2 Nov 2007; 75 points, 85 comments, 4 min read)

Assortative Mating And Autism (Scott Alexander, 28 Jan 2020; 47 points, 2 comments, 4 min read)
[link] Back to the trees ([deleted], 4 Nov 2011; 131 points, 47 comments, 2 min read)
The Octopus, the Dolphin and Us: a Great Filter tale (Stuart_Armstrong, 3 Sep 2014; 76 points, 236 comments, 3 min read)
Your Evolved Intuitions (lukeprog, 5 May 2011; 21 points, 106 comments, 10 min read)
Why would evolution favor more bad? (KatjaGrace, 6 Oct 2013; 1 point, 0 comments, 3 min read)
The Psychological Diversity of Mankind (Kaj_Sotala, 9 May 2010; 136 points, 162 comments, 7 min read)
The Psychological Unity of Humankind (Eliezer Yudkowsky, 24 Jun 2008; 54 points, 23 comments, 4 min read)
Thou Art Godshatter (Eliezer Yudkowsky, 13 Nov 2007; 205 points, 81 comments, 5 min read)
In Search of Slack (Martin Sustrik, 23 May 2020; 47 points, 3 comments, 6 min read)
Growing Up is Hard (Eliezer Yudkowsky, 4 Jan 2009; 49 points, 41 comments, 7 min read)
Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems (Kaj_Sotala, 2 Oct 2020; 44 points, 1 comment, 3 min read)
The Darwin Game (lsusr, 9 Oct 2020; 90 points, 136 comments, 3 min read)
Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare (Kaj_Sotala, 24 Nov 2020; 83 points, 20 comments, 2 min read)
Against evolution as an analogy for how humans will create AGI (Steven Byrnes, 23 Mar 2021; 46 points, 25 comments, 25 min read)
Anthropic Effects in Estimating Evolution Difficulty (Mark Xu, 5 Jul 2021; 12 points, 2 comments, 3 min read)
The two-headed bacterium (Malmesbury, 10 Aug 2021; 65 points, 4 comments, 7 min read)
[Book Review] “The Vital Question” by Nick Lane (lsusr, 27 Sep 2021; 70 points, 26 comments, 7 min read)
Contra Paul Christiano on Sex (George3d6, 1 Oct 2021; 22 points, 19 comments, 9 min read)
[Book Review] Evolution of Sex (Alex Hollow, 3 Oct 2021; 36 points, 0 comments, 8 min read)
Book Review: Why Everyone (Else) Is a Hypocrite (PeterMcCluskey, 9 Oct 2021; 26 points, 7 comments, 3 min read)
[Question] Do you like excessive sugar? (amaury lorin, 9 Oct 2021; 3 points, 11 comments, 1 min read)
[Question] Total compute available to evolution (redbird, 9 Jan 2022; 16 points, 19 comments, 1 min read)
Whence the sexes? (Richard_Ngo, 13 Feb 2022; 61 points, 35 comments, 3 min read)
Replacing Natural Interpretations (adamShimi, 16 Mar 2022; 16 points, 0 comments, 7 min read)
Examining Evolution as an Upper Bound for AGI Timelines (meanderingmoose, 24 Apr 2022; 5 points, 1 comment, 9 min read)
We haven’t quit evolution [short] (the gears to ascension, 6 Jun 2022; 5 points, 3 comments, 2 min read)

Human values & biases are inaccessible to the genome (TurnTrout, 7 Jul 2022; 88 points, 51 comments, 6 min read)
How evolution succeeds and fails at value alignment (Ocracoke, 21 Aug 2022; 21 points, 2 comments, 4 min read)
Orexin and the quest for more waking hours (ChristianKl, 24 Sep 2022; 124 points, 37 comments, 5 min read)
The heritability of human values: A behavior genetic critique of Shard Theory (geoffreymiller, 20 Oct 2022; 66 points, 59 comments, 21 min read)
Lessons from Convergent Evolution for AI Alignment (27 Mar 2023; 41 points, 9 comments, 8 min read)
AI and Evolution (Dan H, 30 Mar 2023; 21 points, 4 comments, 2 min read)
The Dark Miracle of Optics (Suspended Reason, 24 Jun 2020; 27 points, 5 comments, 8 min read)
References & Resources for LessWrong (XiXiDu, 10 Oct 2010; 156 points, 106 comments, 20 min read)
Hedonic asymmetries (paulfchristiano, 26 Jan 2020; 97 points, 22 comments, 2 min read)
Reframing the evolutionary benefit of sex (paulfchristiano, 14 Sep 2019; 89 points, 22 comments, 2 min read)
Winning is for Losers (Jacob Falkovich, 11 Oct 2017; 31 points, 15 comments, 18 min read)
Notes From an Apocalypse (Toggle, 22 Sep 2017; 56 points, 25 comments, 14 min read)
You’re Entitled to Arguments, But Not (That Particular) Proof (Eliezer Yudkowsky, 15 Feb 2010; 74 points, 229 comments, 8 min read)
You’re in Newcomb’s Box (HonoreDB, 5 Feb 2011; 59 points, 176 comments, 4 min read)
Humans in Funny Suits (Eliezer Yudkowsky, 30 Jul 2008; 64 points, 132 comments, 7 min read)
Anthropomorphic Optimism (Eliezer Yudkowsky, 4 Aug 2008; 68 points, 59 comments, 5 min read)
Group selection update (PhilGoetz, 1 Nov 2010; 48 points, 67 comments, 5 min read)
Three Fallacies of Teleology (Eliezer Yudkowsky, 25 Aug 2008; 34 points, 16 comments, 9 min read)
[Question] Why do humans not have built-in neural i/o channels? (Richard_Ngo, 8 Aug 2019; 25 points, 24 comments, 1 min read)
What strange and ancient things might we find beneath the ice? (Benquo, 15 Jan 2018; 16 points, 2 comments, 2 min read)
Is That Your True Rejection? by Eliezer Yudkowsky @ Cato Unbound (XiXiDu, 7 Sep 2011; 44 points, 83 comments, 1 min read)
What is the group selection debate? (Academian, 2 Nov 2010; 37 points, 16 comments, 3 min read)
Natural Selection’s Speed Limit and Complexity Bound (Eliezer Yudkowsky, 4 Nov 2007; 11 points, 105 comments, 5 min read)

More Questions about Trees (digital_carver, 9 Oct 2020; 3 points, 5 comments, 1 min read)
Modularity and Buzzy (Kaj_Sotala, 4 Aug 2011; 33 points, 27 comments, 9 min read)
Why mathematics works (Douglas_Reay, 8 Mar 2018; 7 points, 4 comments, 5 min read)
A Failed Just-So Story (Eliezer Yudkowsky, 5 Jan 2008; 17 points, 49 comments, 2 min read)
“Inner Alignment Failures” Which Are Actually Outer Alignment Failures (johnswentworth, 31 Oct 2020; 61 points, 38 comments, 5 min read)
Observing Optimization (Eliezer Yudkowsky, 21 Nov 2008; 12 points, 28 comments, 6 min read)
Building Something Smarter (Eliezer Yudkowsky, 2 Nov 2008; 23 points, 57 comments, 4 min read)
Musings on Cumulative Cultural Evolution and AI (calebo, 7 Jul 2019; 19 points, 5 comments, 7 min read)
An appeal for vitamin D supplementation as a prophylactic for coronaviruses and influenza and a simple evolutionary theory for why this is plausible (Michael A, 22 Dec 2020; 11 points, 1 comment, 9 min read)
Evolution and fitness vs self-awareness and memetics (FractalParrot, 29 Nov 2020; 3 points, 2 comments, 3 min read)
Are we all misaligned? (Mateusz Mazurkiewicz, 3 Jan 2021; 11 points, 0 comments, 5 min read)
On the nature of purpose (Nora_Ammann, 22 Jan 2021; 28 points, 15 comments, 9 min read)
Evolutions Building Evolutions: Layers of Generate and Test (plex, 5 Feb 2021; 11 points, 1 comment, 6 min read)
Idea selection (krbouchard, 1 Mar 2021; 1 point, 0 comments, 2 min read)
Fisherian Runaway as a decision-theoretic problem (Bunthut, 20 Mar 2021; 11 points, 0 comments, 3 min read)
Some real examples of gradient hacking (Oliver Sourbut, 22 Nov 2021; 15 points, 8 comments, 2 min read)
Second-order selection against the immortal (Malmesbury, 3 Dec 2021; 44 points, 47 comments, 6 min read)
Motivations, Natural Selection, and Curriculum Engineering (Oliver Sourbut, 16 Dec 2021; 16 points, 0 comments, 42 min read)
The Genetics of Space Amazons (Jan Christian Refsgaard, 30 Dec 2021; 12 points, 13 comments, 5 min read)
Regularization Causes Modularity Causes Generalization (dkirmani, 1 Jan 2022; 50 points, 7 comments, 3 min read)
Can we simulate human evolution to create a somewhat aligned AGI? (Thomas Kwa, 28 Mar 2022; 21 points, 7 comments, 7 min read)
Is Fisherian Runaway Gradient Hacking? (Ryan Kidd, 10 Apr 2022; 15 points, 7 comments, 4 min read)

The Fourth Arena: What’s Up in the world these days? We’re moving to a new, a new what? (Bill Benzon, 4 Jun 2022; 2 points, 0 comments, 3 min read)
The Fourth Arena 2: New beings in time (Bill Benzon, 5 Jun 2022; 1 point, 0 comments, 2 min read)
Deliberation Everywhere: Simple Examples (Oliver Sourbut, 27 Jun 2022; 14 points, 0 comments, 15 min read)
Nature abhors an immutable replicator… usually (MSRayne, 3 Jul 2022; 26 points, 8 comments, 3 min read)
Evolution Doesn’t Have Feelings (Matt Goldwater, 3 Jul 2022; −1 points, 0 comments, 1 min read)
The Dumbest Possible Gets There First (Artaxerxes, 13 Aug 2022; 41 points, 7 comments, 2 min read)
Do bamboos set themselves on fire? (Malmesbury, 19 Sep 2022; 146 points, 13 comments, 6 min read)
What “The Message” Was For Me (Alex Beyman, 11 Oct 2022; −3 points, 14 comments, 4 min read)
Everybody Comes Back (Alex Beyman, 24 Sep 2022; 8 points, 0 comments, 27 min read)
(untitled) (Alex Beyman, 1 Oct 2022; −1 points, 7 comments, 43 min read)
Takeoff speeds, the chimps analogy, and the Cultural Intelligence Hypothesis (NickGabs, 2 Dec 2022; 16 points, 2 comments, 4 min read)
Could evolution produce something truly aligned with its own optimization standards? What would an answer to this mean for AI alignment? (No77e, 8 Jan 2023; 3 points, 4 comments, 1 min read)
Where are the cryptographic timestamps of all data from bio-safety level 4 laboratories? (Joseph Van Name, 7 Feb 2023; −1 points, 0 comments, 3 min read)
A multi-disciplinary view on AI safety research (Roman Leventov, 8 Feb 2023; 36 points, 4 comments, 26 min read)
Why I’m Skeptical of De-Extinction (Niko_McCarty, 23 Feb 2023; 16 points, 1 comment, 11 min read)
The Answer (Alex Beyman, 19 Mar 2023; 1 point, 0 comments, 4 min read)
The Patent Clerk (Alex Beyman, 25 Mar 2023; 13 points, 5 comments, 4 min read)