
AI Takeoff


AI Takeoff refers to the process of an Artificial General Intelligence going from a certain threshold of capability (often discussed as “human-level”) to being superintelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow or fast, i.e., “soft” or “hard”.

See also: AI Timelines, Seed AI, Singularity, Intelligence explosion, Recursive self-improvement

AI takeoff is sometimes casually referred to as AI FOOM.

Soft takeoff

A soft takeoff refers to an AGI that self-improves over a period of years or decades. This could happen either because the learning algorithm is too computationally demanding for available hardware, or because the AI depends on feedback from the real world that has to play out in real time. Possible routes to a soft takeoff, which would build slowly on human-level intelligence, include Whole brain emulation, Biological Cognitive Enhancement, and software-based strong AGI [1]. If humans maintain control of the AGI’s ascent, it should be easier for a Friendly AI to emerge.

Vernor Vinge and Hans Moravec have both expressed the view that a soft takeoff is preferable to a hard takeoff, as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff (or an AI going “FOOM” [2]) refers to an AGI expanding in capability over a matter of minutes, days, or months: a fast, abrupt, local increase in capability. This scenario is widely considered much more precarious, since it involves an AGI rapidly ascending in power without human control. The result may be unexpected or undesired behavior (i.e., an Unfriendly AI). It is one of the main ideas supporting the Intelligence explosion hypothesis.

The feasibility of a hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks involved. Yudkowsky points out several factors that would make a hard takeoff more likely than a soft takeoff, such as the existence of large resource overhangs or the observation that small improvements can have a large impact on a mind’s general intelligence (e.g., the small genetic difference between humans and chimpanzees led to a huge increase in capability) [3].

References

  1. http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html

  2. http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/

  3. http://lesswrong.com/lw/wf/hard_takeoff/

Soft takeoff can still lead to decisive strategic advantage

Daniel Kokotajlo · 23 Aug 2019 16:39 UTC
113 points
46 comments · 8 min read · LW link · 2 nominations · 4 reviews

New report: Intelligence Explosion Microeconomics

Eliezer Yudkowsky · 29 Apr 2013 23:14 UTC
72 points
251 comments · 3 min read · LW link

AlphaGo Zero and the Foom Debate

Eliezer Yudkowsky · 21 Oct 2017 2:18 UTC
78 points
16 comments · 3 min read · LW link

Will AI undergo discontinuous progress?

SDM · 21 Feb 2020 22:16 UTC
25 points
20 comments · 20 min read · LW link

Arguments about fast takeoff

paulfchristiano · 25 Feb 2018 4:53 UTC
68 points
63 comments · 2 min read · LW link
(sideways-view.com)

Will AI See Sudden Progress?

KatjaGrace · 26 Feb 2018 0:41 UTC
25 points
11 comments · 1 min read · LW link

Discontinuous progress in history: an update

KatjaGrace · 14 Apr 2020 0:00 UTC
163 points
23 comments · 31 min read · LW link
(aiimpacts.org)

Quick Nate/Eliezer comments on discontinuity

Rob Bensinger · 1 Mar 2018 22:03 UTC
42 points
1 comment · 2 min read · LW link

Against GDP as a metric for timelines and takeoff speeds

Daniel Kokotajlo · 29 Dec 2020 17:42 UTC
112 points
13 comments · 14 min read · LW link

Modelling Continuous Progress

SDM · 23 Jun 2020 18:06 UTC
29 points
3 comments · 7 min read · LW link

Possible takeaways from the coronavirus pandemic for slow AI takeoff

Vika · 31 May 2020 17:51 UTC
128 points
35 comments · 3 min read · LW link

Distinguishing definitions of takeoff

Matthew Barnett · 14 Feb 2020 0:16 UTC
53 points
6 comments · 6 min read · LW link

My current framework for thinking about AGI timelines

zhukeepa · 30 Mar 2020 1:23 UTC
101 points
5 comments · 3 min read · LW link

Continuing the takeoffs debate

Richard_Ngo · 23 Nov 2020 15:58 UTC
65 points
13 comments · 9 min read · LW link

Review of Soft Takeoff Can Still Lead to DSA

Daniel Kokotajlo · 10 Jan 2021 18:10 UTC
72 points
13 comments · 6 min read · LW link

The date of AI Takeover is not the day the AI takes over

Daniel Kokotajlo · 22 Oct 2020 10:41 UTC
97 points
23 comments · 2 min read · LW link

How do takeoff speeds affect the probability of bad outcomes from AGI?

KR · 29 Jun 2020 22:06 UTC
15 points
2 comments · 8 min read · LW link

[Question] If the “one cortical algorithm” hypothesis is true, how should one update about timelines and takeoff speed?

Adam Scholl · 26 Aug 2019 7:08 UTC
23 points
9 comments · 1 min read · LW link

Takeoff Speed: Simple Asymptotics in a Toy Model.

Aaron Roth · 5 Mar 2018 17:07 UTC
21 points
21 comments · 9 min read · LW link
(aaronsadventures.blogspot.com)

My Thoughts on Takeoff Speeds

tristanm · 27 Mar 2018 0:05 UTC
10 points
2 comments · 7 min read · LW link

Fast Takeoff in Biological Intelligence

eapache · 25 Apr 2020 12:21 UTC
14 points
21 comments · 2 min read · LW link

Some conceptual highlights from “Disjunctive Scenarios of Catastrophic AI Risk”

Kaj_Sotala · 12 Feb 2018 12:30 UTC
29 points
4 comments · 6 min read · LW link
(kajsotala.fi)

Misconceptions about continuous takeoff

Matthew Barnett · 8 Oct 2019 21:31 UTC
71 points
38 comments · 4 min read · LW link · 1 nomination

More on disambiguating “discontinuity”

alenglander · 9 Jun 2020 15:16 UTC
16 points
1 comment · 3 min read · LW link

Why AI may not foom

John_Maxwell · 24 Mar 2013 8:11 UTC
28 points
81 comments · 12 min read · LW link

S-Curves for Trend Forecasting

Matt Goldenberg · 23 Jan 2019 18:17 UTC
99 points
22 comments · 7 min read · LW link · 2 nominations · 4 reviews

Factorio, Accelerando, Empathizing with Empires and Moderate Takeoffs

Raemon · 4 Feb 2018 2:33 UTC
37 points
19 comments · 4 min read · LW link

[Question] Any rebuttals of Christiano and AI Impacts on takeoff speeds?

SoerenMind · 21 Apr 2019 20:39 UTC
65 points
22 comments · 1 min read · LW link

What Evidence Is AlphaGo Zero Re AGI Complexity?

RobinHanson · 22 Oct 2017 2:28 UTC
36 points
47 comments · 2 min read · LW link

For FAI: Is “Molecular Nanotechnology” putting our best foot forward?

leplen · 22 Jun 2013 4:44 UTC
78 points
118 comments · 3 min read · LW link

A summary of the Hanson-Yudkowsky FOOM debate

Kaj_Sotala · 15 Nov 2012 7:25 UTC
36 points
10 comments · 1 min read · LW link

[Question] Probability that other architectures will scale as well as Transformers?

Daniel Kokotajlo · 28 Jul 2020 19:36 UTC
22 points
4 comments · 1 min read · LW link

Cascades, Cycles, Insight...

Eliezer Yudkowsky · 24 Nov 2008 9:33 UTC
23 points
31 comments · 8 min read · LW link

...Recursion, Magic

Eliezer Yudkowsky · 25 Nov 2008 9:10 UTC
22 points
28 comments · 5 min read · LW link

human psycholinguists: a critical appraisal

nostalgebraist · 31 Dec 2019 0:20 UTC
155 points
57 comments · 16 min read · LW link · 2 nominations · 2 reviews
(nostalgebraist.tumblr.com)

[AN #97]: Are there historical examples of large, robust discontinuities?

rohinmshah · 29 Apr 2020 17:30 UTC
15 points
0 comments · 10 min read · LW link
(mailchi.mp)

Stanford Encyclopedia of Philosophy on AI ethics and superintelligence

Kaj_Sotala · 2 May 2020 7:35 UTC
41 points
19 comments · 7 min read · LW link
(plato.stanford.edu)

Papers for 2017

Kaj_Sotala · 4 Jan 2018 13:30 UTC
12 points
2 comments · 2 min read · LW link
(kajsotala.fi)

AI Alignment 2018-19 Review

rohinmshah · 28 Jan 2020 2:19 UTC
115 points
6 comments · 35 min read · LW link

[LINK] What should a reasonable person believe about the Singularity?

Kaj_Sotala · 13 Jan 2011 9:32 UTC
38 points
14 comments · 2 min read · LW link

Preface to the sequence on economic growth

Matthew Barnett · 27 Aug 2020 20:29 UTC
46 points
0 comments · 4 min read · LW link

[Question] Responses to Christiano on takeoff speeds?

Richard_Ngo · 30 Oct 2020 15:16 UTC
28 points
7 comments · 1 min read · LW link

[Question] Mathematical Models of Progress?

abramdemski · 16 Feb 2021 0:21 UTC
28 points
8 comments · 2 min read · LW link

[Question] What are the mostly likely ways AGI will emerge?

Craig Quiter · 14 Jul 2020 0:58 UTC
3 points
7 comments · 1 min read · LW link

The abruptness of nuclear weapons

paulfchristiano · 25 Feb 2018 17:40 UTC
46 points
35 comments · 2 min read · LW link

Might humans not be the most intelligent animals?

Matthew Barnett · 23 Dec 2019 21:50 UTC
53 points
41 comments · 3 min read · LW link

[Question] How common is it for one entity to have a 3+ year technological lead on its nearest competitor?

Daniel Kokotajlo · 17 Nov 2019 15:23 UTC
49 points
20 comments · 1 min read · LW link · 1 nomination

AGI will drastically increase economies of scale

Wei_Dai · 7 Jun 2019 23:17 UTC
46 points
24 comments · 2 min read · LW link · 1 nomination

In Defense of the Arms Races… that End Arms Races

Gentzel · 15 Jan 2020 21:30 UTC
38 points
9 comments · 3 min read · LW link
(theconsequentialist.wordpress.com)

Why no total winner?

Paul Crowley · 15 Oct 2017 22:01 UTC
26 points
19 comments · 2 min read · LW link

Surprised by Brains

Eliezer Yudkowsky · 23 Nov 2008 7:26 UTC
47 points
28 comments · 7 min read · LW link

Double Cruxing the AI Foom debate

agilecaveman · 27 Apr 2018 6:46 UTC
17 points
3 comments · 11 min read · LW link

What I Think, If Not Why

Eliezer Yudkowsky · 11 Dec 2008 17:41 UTC
35 points
103 comments · 4 min read · LW link

Muehlhauser-Goertzel Dialogue, Part 1

lukeprog · 16 Mar 2012 17:12 UTC
42 points
161 comments · 33 min read · LW link

Levels of AI Self-Improvement

avturchin · 29 Apr 2018 11:45 UTC
9 points
0 comments · 39 min read · LW link

Superintelligence 6: Intelligence explosion kinetics

KatjaGrace · 21 Oct 2014 1:00 UTC
15 points
68 comments · 8 min read · LW link

What if AI doesn’t quite go FOOM?

Mass_Driver · 20 Jun 2010 0:03 UTC
16 points
191 comments · 5 min read · LW link

Security Mindset and Takeoff Speeds

DanielFilan · 27 Oct 2020 3:20 UTC
53 points
23 comments · 8 min read · LW link
(danielfilan.com)

Engelbart: Insufficiently Recursive

Eliezer Yudkowsky · 26 Nov 2008 8:31 UTC
17 points
22 comments · 7 min read · LW link

True Sources of Disagreement

Eliezer Yudkowsky · 8 Dec 2008 15:51 UTC
11 points
53 comments · 8 min read · LW link

Should we postpone AGI until we reach safety?

otto.barten · 18 Nov 2020 15:43 UTC
23 points
36 comments · 3 min read · LW link

[Question] Poll: Which variables are most strategically relevant?

22 Jan 2021 17:17 UTC
32 points
34 comments · 1 min read · LW link