AI Takeoff

Last edit: 14 Sep 2020 23:32 UTC by Ruby

AI Takeoff refers to the process of an Artificial General Intelligence going from a certain threshold of capability (often discussed as “human-level”) to being superintelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow (“soft”) or fast (“hard”).

See also: AI Timelines, Seed AI, Singularity, Intelligence explosion, Recursive self-improvement

AI takeoff is sometimes casually referred to as AI FOOM.

Soft takeoff

A soft takeoff is one in which an AGI self-improves gradually, over years or decades. This could happen either because the learning algorithm is too computationally demanding for available hardware, or because the AI depends on feedback from the real world that can only be gathered in real time. Possible routes to a soft takeoff, which would build slowly on human-level intelligence, include Whole brain emulation, Biological Cognitive Enhancement, and software-based strong AGI [1]. If humans retain control over the AGI’s ascent, it should be easier for a Friendly AI to emerge.

Vernor Vinge and Hans Moravec, among others, have expressed the view that a soft takeoff would be preferable to a hard takeoff, as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff (or an AI going “FOOM” [2]) refers to an AGI expanding in capability over a matter of minutes, days, or months: a fast, abrupt, local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. It may result in unexpected or undesired behavior (i.e., an Unfriendly AI). A hard takeoff is one of the main ideas supporting the Intelligence explosion hypothesis.

The feasibility of a hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks involved. Yudkowsky points out several factors that would make a hard takeoff more likely than a soft takeoff, such as the existence of large resource overhangs or the fact that small improvements can have a large impact on a mind’s general intelligence (e.g., the small genetic difference between humans and chimpanzees led to huge gains in capability) [3].
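The contrast between these two regimes is often illustrated with toy growth models (several are linked below, e.g. “Modelling Continuous Progress” and “Takeoff Speed: Simple Asymptotics in a Toy Model”). The following Python snippet is a minimal sketch of that kind of model, not something taken from this article or those posts: the growth law, the parameter names, and the numbers are all assumptions chosen purely for illustration.

```python
# Illustrative toy model (an assumption for this sketch, not from the article):
# capability grows as the system reinvests capability into self-improvement,
# dC/dt = rate * C**k. The exponent k stands in for returns on cognitive
# reinvestment: k < 1 gives a decelerating, "soft" climb, while k > 1 gives an
# accelerating, "hard" blow-up.

def simulate_takeoff(k, capability=1.0, rate=0.05, steps=100, cap=1e9):
    """Euler-step the growth law and return the capability trajectory."""
    trajectory = [capability]
    for _ in range(steps):
        # Cap the value so the k > 1 case doesn't overflow a float.
        capability = min(cap, capability + rate * capability ** k)
        trajectory.append(capability)
    return trajectory

soft = simulate_takeoff(k=0.5)  # diminishing returns: slow, steady growth
hard = simulate_takeoff(k=1.5)  # increasing returns: explosive growth

print(f"soft (k=0.5) after 100 steps: {soft[-1]:.1f}")
print(f"hard (k=1.5) after 100 steps: {hard[-1]:.3g}")
```

Under these assumptions the k = 0.5 run grows only about an order of magnitude over 100 steps, while the k = 1.5 run hits the cap well before the 100 steps are up; the qualitative contrast, not the particular numbers, is the point.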

References

  1. http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html

  2. http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/

  3. http://lesswrong.com/lw/wf/hard_takeoff/

Soft takeoff can still lead to decisive strategic advantage · Daniel Kokotajlo · 23 Aug 2019 · 117 points · 46 comments · 8 min read · 4 reviews

New report: Intelligence Explosion Microeconomics · Eliezer Yudkowsky · 29 Apr 2013 · 72 points · 251 comments · 3 min read

AlphaGo Zero and the Foom Debate · Eliezer Yudkowsky · 21 Oct 2017 · 81 points · 17 comments · 3 min read

Will AI undergo discontinuous progress? · Sammy Martin · 21 Feb 2020 · 25 points · 21 comments · 20 min read

Arguments about fast takeoff · paulfchristiano · 25 Feb 2018 · 75 points · 64 comments · 2 min read · 1 review · (sideways-view.com)

Will AI See Sudden Progress? · KatjaGrace · 26 Feb 2018 · 27 points · 11 comments · 1 min read · 1 review

Discontinuous progress in history: an update · KatjaGrace · 14 Apr 2020 · 178 points · 25 comments · 31 min read · 1 review · (aiimpacts.org)

Quick Nate/Eliezer comments on discontinuity · Rob Bensinger · 1 Mar 2018 · 43 points · 1 comment · 2 min read

Takeoff Speeds and Discontinuities · 30 Sep 2021 · 56 points · 1 comment · 15 min read

Against GDP as a metric for timelines and takeoff speeds · Daniel Kokotajlo · 29 Dec 2020 · 130 points · 15 comments · 14 min read · 1 review

Modelling Continuous Progress · Sammy Martin · 23 Jun 2020 · 29 points · 3 comments · 7 min read

Possible takeaways from the coronavirus pandemic for slow AI takeoff · Vika · 31 May 2020 · 135 points · 36 comments · 3 min read · 1 review

Distinguishing definitions of takeoff · Matthew Barnett · 14 Feb 2020 · 57 points · 6 comments · 6 min read

My current framework for thinking about AGI timelines · zhukeepa · 30 Mar 2020 · 106 points · 5 comments · 3 min read

Continuing the takeoffs debate · Richard_Ngo · 23 Nov 2020 · 67 points · 13 comments · 9 min read

Review of Soft Takeoff Can Still Lead to DSA · Daniel Kokotajlo · 10 Jan 2021 · 75 points · 15 comments · 6 min read

Analogies and General Priors on Intelligence · 20 Aug 2021 · 57 points · 12 comments · 14 min read

Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1) · 𝕮𝖎𝖓𝖊𝖗𝖆 · 4 Jun 2022 · 17 points · 8 comments · 13 min read

The date of AI Takeover is not the day the AI takes over · Daniel Kokotajlo · 22 Oct 2020 · 118 points · 32 comments · 2 min read · 1 review

AI takeoff story: a continuation of progress by other means · Edouard Harris · 27 Sep 2021 · 74 points · 13 comments · 10 min read

Yudkowsky and Christiano discuss “Takeoff Speeds” · Eliezer Yudkowsky · 22 Nov 2021 · 189 points · 169 comments · 60 min read

More Christiano, Cotra, and Yudkowsky on AI progress · 6 Dec 2021 · 84 points · 30 comments · 40 min read

Everything I Need To Know About Takeoff Speeds I Learned From Air Conditioner Ratings On Amazon · johnswentworth · 15 Apr 2022 · 147 points · 125 comments · 5 min read

How do takeoff speeds affect the probability of bad outcomes from AGI? · KR · 29 Jun 2020 · 15 points · 2 comments · 8 min read

Takeoff Speed: Simple Asymptotics in a Toy Model. · Aaron Roth · 5 Mar 2018 · 21 points · 21 comments · 9 min read · (aaronsadventures.blogspot.com)

My Thoughts on Takeoff Speeds · tristanm · 27 Mar 2018 · 10 points · 2 comments · 7 min read

Fast Takeoff in Biological Intelligence · eapache · 25 Apr 2020 · 14 points · 21 comments · 2 min read

Some conceptual highlights from “Disjunctive Scenarios of Catastrophic AI Risk” · Kaj_Sotala · 12 Feb 2018 · 33 points · 4 comments · 6 min read · (kajsotala.fi)

Misconceptions about continuous takeoff · Matthew Barnett · 8 Oct 2019 · 79 points · 38 comments · 4 min read

More on disambiguating “discontinuity” · Aryeh Englander · 9 Jun 2020 · 16 points · 1 comment · 3 min read

Why AI may not foom · John_Maxwell · 24 Mar 2013 · 29 points · 81 comments · 12 min read

S-Curves for Trend Forecasting · Matt Goldenberg · 23 Jan 2019 · 99 points · 22 comments · 7 min read · 4 reviews

Factorio, Accelerando, Empathizing with Empires and Moderate Takeoffs · Raemon · 4 Feb 2018 · 49 points · 19 comments · 4 min read

[Question] Any rebuttals of Christiano and AI Impacts on takeoff speeds? · SoerenMind · 21 Apr 2019 · 67 points · 26 comments · 1 min read

What Evidence Is AlphaGo Zero Re AGI Complexity? · RobinHanson · 22 Oct 2017 · 36 points · 47 comments · 2 min read

For FAI: Is “Molecular Nanotechnology” putting our best foot forward? · leplen · 22 Jun 2013 · 79 points · 118 comments · 3 min read

A summary of the Hanson-Yudkowsky FOOM debate · Kaj_Sotala · 15 Nov 2012 · 36 points · 10 comments · 1 min read

[Question] Probability that other architectures will scale as well as Transformers? · Daniel Kokotajlo · 28 Jul 2020 · 22 points · 4 comments · 1 min read

Cascades, Cycles, Insight... · Eliezer Yudkowsky · 24 Nov 2008 · 24 points · 31 comments · 8 min read

...Recursion, Magic · Eliezer Yudkowsky · 25 Nov 2008 · 24 points · 28 comments · 5 min read

human psycholinguists: a critical appraisal · nostalgebraist · 31 Dec 2019 · 167 points · 59 comments · 16 min read · 2 reviews · (nostalgebraist.tumblr.com)

[AN #97]: Are there historical examples of large, robust discontinuities? · Rohin Shah · 29 Apr 2020 · 15 points · 0 comments · 10 min read · (mailchi.mp)

Stanford Encyclopedia of Philosophy on AI ethics and superintelligence · Kaj_Sotala · 2 May 2020 · 43 points · 19 comments · 7 min read · (plato.stanford.edu)

Papers for 2017 · Kaj_Sotala · 4 Jan 2018 · 12 points · 2 comments · 2 min read · (kajsotala.fi)

AI Alignment 2018-19 Review · Rohin Shah · 28 Jan 2020 · 125 points · 6 comments · 35 min read

[LINK] What should a reasonable person believe about the Singularity? · Kaj_Sotala · 13 Jan 2011 · 38 points · 14 comments · 2 min read

Preface to the sequence on economic growth · Matthew Barnett · 27 Aug 2020 · 51 points · 0 comments · 4 min read

[Question] Responses to Christiano on takeoff speeds? · Richard_Ngo · 30 Oct 2020 · 29 points · 8 comments · 1 min read

[Question] Mathematical Models of Progress? · abramdemski · 16 Feb 2021 · 28 points · 8 comments · 2 min read

Life and expanding steerable consequences · Alex Flint · 7 May 2021 · 46 points · 3 comments · 4 min read

[Question] Is driving worth the risk? · adamzerner · 11 May 2021 · 26 points · 29 comments · 7 min read

Hard Takeoff · Eliezer Yudkowsky · 2 Dec 2008 · 30 points · 33 comments · 11 min read

What 2026 looks like · Daniel Kokotajlo · 6 Aug 2021 · 326 points · 67 comments · 16 min read

[Question] Is there a name for the theory that “There will be fast takeoff in real-world capabilities because almost everything is AGI-complete”? · capybaralet · 2 Sep 2021 · 31 points · 8 comments · 1 min read

[Question] What’s the likelihood of only sub exponential growth for AGI? · M. Y. Zuo · 13 Nov 2021 · 5 points · 22 comments · 1 min read

Ngo and Yudkowsky on AI capability gains · 18 Nov 2021 · 127 points · 61 comments · 39 min read

Christiano, Cotra, and Yudkowsky on AI progress · 25 Nov 2021 · 115 points · 95 comments · 68 min read

Shulman and Yudkowsky on AI progress · 3 Dec 2021 · 91 points · 16 comments · 20 min read

Conversation on technology forecasting and gradualism · 9 Dec 2021 · 106 points · 31 comments · 31 min read

My Overview of the AI Alignment Landscape: A Bird’s Eye View · Neel Nanda · 15 Dec 2021 · 104 points · 9 comments · 16 min read

Brain Efficiency: Much More than You Wanted to Know · jacob_cannell · 6 Jan 2022 · 155 points · 78 comments · 28 min read

Wargaming AGI Development · ryan_b · 19 Mar 2022 · 35 points · 13 comments · 5 min read

Jeff Shainline thinks that there is too much serendipity in the physics of optical/superconducting computing, suggesting that they were part of the criteria of Cosmological Natural Selection, which could have some fairly lovecraftian implications · MakoYass · 1 Apr 2022 · 14 points · 3 comments · 26 min read

Hyperbolic takeoff · Ege Erdil · 9 Apr 2022 · 17 points · 8 comments · 10 min read · (www.metaculus.com)

Takeoff speeds have a huge effect on what it means to work on AI x-risk · Buck · 13 Apr 2022 · 115 points · 24 comments · 2 min read

For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines. · Francis Rhys Ward · 21 Apr 2022 · 29 points · 13 comments · 3 min read

“Tech company singularities”, and steering them to reduce x-risk · Andrew_Critch · 13 May 2022 · 68 points · 12 comments · 4 min read

Frame for Take-Off Speeds to inform compute governance & scaling alignment · Logan Riggs · 13 May 2022 · 14 points · 2 comments · 2 min read

Continuity Assumptions · Jan_Kulveit · 13 Jun 2022 · 24 points · 13 comments · 4 min read

Loose thoughts on AGI risk · Yitz · 23 Jun 2022 · 7 points · 3 comments · 1 min read

Announcing Epoch: A research organization investigating the road to Transformative AI · 27 Jun 2022 · 88 points · 2 comments · 2 min read · (epochai.org)

[Question] What are the mostly likely ways AGI will emerge? · Craig Quiter · 14 Jul 2020 · 3 points · 7 comments · 1 min read

The abruptness of nuclear weapons · paulfchristiano · 25 Feb 2018 · 46 points · 35 comments · 2 min read

Might humans not be the most intelligent animals? · Matthew Barnett · 23 Dec 2019 · 55 points · 41 comments · 3 min read

[Question] How common is it for one entity to have a 3+ year technological lead on its nearest competitor? · Daniel Kokotajlo · 17 Nov 2019 · 49 points · 20 comments · 1 min read

AGI will drastically increase economies of scale · Wei_Dai · 7 Jun 2019 · 51 points · 25 comments · 2 min read

In Defense of the Arms Races… that End Arms Races · Gentzel · 15 Jan 2020 · 38 points · 9 comments · 3 min read · (theconsequentialist.wordpress.com)

Why no total winner? · Paul Crowley · 15 Oct 2017 · 36 points · 19 comments · 2 min read

Surprised by Brains · Eliezer Yudkowsky · 23 Nov 2008 · 54 points · 28 comments · 7 min read

Double Cruxing the AI Foom debate · agilecaveman · 27 Apr 2018 · 17 points · 3 comments · 11 min read

What I Think, If Not Why · Eliezer Yudkowsky · 11 Dec 2008 · 37 points · 103 comments · 4 min read

Muehlhauser-Goertzel Dialogue, Part 1 · lukeprog · 16 Mar 2012 · 42 points · 161 comments · 33 min read

Levels of AI Self-Improvement · avturchin · 29 Apr 2018 · 11 points · 0 comments · 39 min read

Superintelligence 6: Intelligence explosion kinetics · KatjaGrace · 21 Oct 2014 · 15 points · 68 comments · 8 min read

What if AI doesn’t quite go FOOM? · Mass_Driver · 20 Jun 2010 · 16 points · 191 comments · 5 min read

Security Mindset and Takeoff Speeds · DanielFilan · 27 Oct 2020 · 54 points · 23 comments · 8 min read · (danielfilan.com)

Engelbart: Insufficiently Recursive · Eliezer Yudkowsky · 26 Nov 2008 · 19 points · 22 comments · 7 min read

True Sources of Disagreement · Eliezer Yudkowsky · 8 Dec 2008 · 11 points · 53 comments · 8 min read

Should we postpone AGI until we reach safety? · otto.barten · 18 Nov 2020 · 26 points · 36 comments · 3 min read

[Question] Poll: Which variables are most strategically relevant? · 22 Jan 2021 · 32 points · 34 comments · 1 min read

A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly · RomanS · 22 Sep 2021 · 5 points · 2 comments · 1 min read

A Framework of Prediction Technologies · isaduan · 3 Oct 2021 · 8 points · 2 comments · 9 min read

Inference cost limits the impact of ever larger models · SoerenMind · 23 Oct 2021 · 36 points · 28 comments · 2 min read

Soares, Tallinn, and Yudkowsky discuss AGI cognition · 29 Nov 2021 · 117 points · 34 comments · 40 min read

What role should evolutionary analogies play in understanding AI takeoff speeds? · anson.ho · 11 Dec 2021 · 14 points · 0 comments · 42 min read

Researcher incentives cause smoother progress on benchmarks · ryan_greenblatt · 21 Dec 2021 · 20 points · 4 comments · 1 min read

Potential gears level explanations of smooth progress · ryan_greenblatt · 22 Dec 2021 · 4 points · 2 comments · 2 min read

Question 5: The timeline hyperparameter · Cameron Berg · 14 Feb 2022 · 5 points · 3 comments · 7 min read

[Question] What would make you confident that AGI has been achieved? · Yitz · 29 Mar 2022 · 17 points · 6 comments · 1 min read

It Looks Like You’re Trying To Take Over The World · gwern · 9 Mar 2022 · 376 points · 124 comments · 1 min read · (www.gwern.net)

[Question] What is being improved in recursive self improvement? · Conor Sullivan · 25 Apr 2022 · 6 points · 7 comments · 1 min read

Why Copilot Accelerates Timelines · Michaël Trazzi · 26 Apr 2022 · 31 points · 14 comments · 7 min read

AI Alternative Futures: Scenario Mapping Artificial Intelligence Risk—Request for Participation (*Edit*) · Kakili · 27 Apr 2022 · 10 points · 2 comments · 9 min read

The Hard Intelligence Hypothesis and Its Bearing on Succession Induced Foom · 𝕮𝖎𝖓𝖊𝖗𝖆 · 31 May 2022 · 10 points · 7 comments · 4 min read

We will be around in 30 years · mukashi · 7 Jun 2022 · 13 points · 205 comments · 2 min read

Agent level parallelism · Johannes C. Mayer · 18 Jun 2022 · 6 points · 5 comments · 1 min read

Is General Intelligence “Compact”? · 𝕮𝖎𝖓𝖊𝖗𝖆 · 26 Jun 2022 · 7 points · 4 comments · 12 min read