
AI Takeoff


AI Takeoff refers to the process of an Artificial General Intelligence going from a certain threshold of capability (often discussed as “human-level”) to being superintelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow (“soft”) or fast (“hard”).

See also: AI Timelines, Seed AI, Singularity, Intelligence explosion, Recursive self-improvement

AI takeoff, particularly a hard takeoff, is sometimes casually referred to as AI FOOM.

Soft takeoff

A soft takeoff refers to an AGI that would self-improve over a period of years or decades. This could be because the learning algorithms are too demanding for the available hardware, or because the AI relies on feedback from the real world that has to play out in real time. Possible methods that could deliver a soft takeoff, by slowly building on human-level intelligence, are Whole brain emulation, Biological Cognitive Enhancement, and software-based strong AGI [1]. Maintaining control over the AGI’s ascent in this way should make it easier for a Friendly AI to emerge.

Vernor Vinge and Hans Moravec, among others, have expressed the view that a soft takeoff would be preferable to a hard takeoff, as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff (or an AI going “FOOM” [2]) refers to an AGI expanding its capability in a matter of minutes, days, or months. It is a fast, abrupt, and local increase in capability. This scenario is widely considered much more precarious, as it involves an AGI rapidly ascending in power without human control. It may result in unexpected or undesired behavior (i.e., an Unfriendly AI). A hard takeoff is one of the main ideas supporting the Intelligence explosion hypothesis.

The feasibility of hard takeoff has been addressed by Hugo de Garis, Eliezer Yudkowsky, Ben Goertzel, Nick Bostrom, and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks involved. Yudkowsky points out several factors that would make a hard takeoff more likely than a soft takeoff, such as the existence of large resource overhangs or the possibility that small improvements can have a large impact on a mind’s general intelligence (e.g., the relatively small genetic difference between humans and chimpanzees led to huge increases in capability) [3].
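
The soft/hard distinction can be made concrete with a toy growth model (an illustrative sketch for this article, not a model taken from the sources cited above). Let I(t) denote the AI’s capability, c a constant, and α the returns on cognitive reinvestment, i.e., how much each capability gain helps produce the next one:

```latex
% Toy capability-growth model (illustrative assumption only):
%   dI/dt = c * I^alpha,   I(0) = I_0 > 0,   c > 0
% alpha <= 1: at most exponential growth, the qualitative shape of a soft takeoff;
% alpha > 1 : the solution diverges in finite time (hyperbolic growth), the shape
%             associated with a hard takeoff or "FOOM".
\frac{dI}{dt} = c\,I^{\alpha}, \qquad
I(t) =
\begin{cases}
I_0\,e^{c t}, & \alpha = 1,\\[4pt]
\bigl(I_0^{\,1-\alpha} + (1-\alpha)\,c\,t\bigr)^{\frac{1}{1-\alpha}}, & \alpha \neq 1.
\end{cases}
```

On this toy picture, much of the soft-versus-hard disagreement reduces to whether the effective returns on cognitive reinvestment sit below or above the critical value, which is roughly the question examined quantitatively in posts such as “Intelligence Explosion Microeconomics” and “Hyperbolic takeoff” listed below.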

Notable posts

External links

References

  1. http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html

  2. http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/

  3. http://lesswrong.com/lw/wf/hard_takeoff/

AlphaGo Zero and the Foom Debate
Eliezer Yudkowsky, 21 Oct 2017 2:18 UTC, 98 points, 17 comments, 3 min read, LW link

New report: Intelligence Explosion Microeconomics
Eliezer Yudkowsky, 29 Apr 2013 23:14 UTC, 72 points, 246 comments, 3 min read, LW link

Arguments about fast takeoff
paulfchristiano, 25 Feb 2018 4:53 UTC, 92 points, 66 comments, 2 min read, LW link, 1 review (sideways-view.com)

Discontinuous progress in history: an update
KatjaGrace, 14 Apr 2020 0:00 UTC, 190 points, 25 comments, 31 min read, LW link, 1 review (aiimpacts.org)

Will AI See Sudden Progress?
KatjaGrace, 26 Feb 2018 0:41 UTC, 27 points, 11 comments, 1 min read, LW link, 1 review

Will AI undergo discontinuous progress?
Sammy Martin, 21 Feb 2020 22:16 UTC, 27 points, 21 comments, 20 min read, LW link

Quick Nate/Eliezer comments on discontinuity
Rob Bensinger, 1 Mar 2018 22:03 UTC, 44 points, 1 comment, 2 min read, LW link

Soft takeoff can still lead to decisive strategic advantage
Daniel Kokotajlo, 23 Aug 2019 16:39 UTC, 122 points, 47 comments, 8 min read, LW link, 4 reviews

Takeoff Speeds and Discontinuities
30 Sep 2021 13:50 UTC, 63 points, 1 comment, 15 min read, LW link

Against GDP as a metric for timelines and takeoff speeds
Daniel Kokotajlo, 29 Dec 2020 17:42 UTC, 140 points, 19 comments, 14 min read, LW link, 1 review

Yudkowsky and Christiano discuss “Takeoff Speeds”
Eliezer Yudkowsky, 22 Nov 2021 19:35 UTC, 205 points, 176 comments, 60 min read, LW link, 1 review

Modelling Continuous Progress
Sammy Martin, 23 Jun 2020 18:06 UTC, 30 points, 3 comments, 7 min read, LW link

Conjecture internal survey: AGI timelines and probability of human extinction from advanced AI
Maris Sala, 22 May 2023 14:31 UTC, 154 points, 5 comments, 3 min read, LW link (www.conjecture.dev)

Distinguishing definitions of takeoff
Matthew Barnett, 14 Feb 2020 0:16 UTC, 79 points, 6 comments, 6 min read, LW link

Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1)
DragonGod, 4 Jun 2022 18:42 UTC, 17 points, 11 comments, 13 min read, LW link

Why all the fuss about recursive self-improvement?
So8res, 12 Jun 2022 20:53 UTC, 158 points, 62 comments, 7 min read, LW link, 1 review

Possible takeaways from the coronavirus pandemic for slow AI takeoff
Vika, 31 May 2020 17:51 UTC, 135 points, 36 comments, 3 min read, LW link, 1 review

Continuing the takeoffs debate
Richard_Ngo, 23 Nov 2020 15:58 UTC, 67 points, 11 comments, 9 min read, LW link

Analogies and General Priors on Intelligence
20 Aug 2021 21:03 UTC, 57 points, 12 comments, 14 min read, LW link

Review of Soft Takeoff Can Still Lead to DSA
Daniel Kokotajlo, 10 Jan 2021 18:10 UTC, 79 points, 15 comments, 6 min read, LW link

My current framework for thinking about AGI timelines
zhukeepa, 30 Mar 2020 1:23 UTC, 107 points, 5 comments, 3 min read, LW link

Takeoff speeds, the chimps analogy, and the Cultural Intelligence Hypothesis
NickGabs, 2 Dec 2022 19:14 UTC, 16 points, 2 comments, 4 min read, LW link

Brain Efficiency: Much More than You Wanted to Know
jacob_cannell, 6 Jan 2022 3:38 UTC, 198 points, 102 comments, 29 min read, LW link

More Christiano, Cotra, and Yudkowsky on AI progress
6 Dec 2021 20:33 UTC, 91 points, 28 comments, 40 min read, LW link

A compressed take on recent disagreements
kman, 4 Jul 2022 4:39 UTC, 33 points, 9 comments, 1 min read, LW link

Everything I Need To Know About Takeoff Speeds I Learned From Air Conditioner Ratings On Amazon
johnswentworth, 15 Apr 2022 19:05 UTC, 165 points, 128 comments, 5 min read, LW link

AI takeoff story: a continuation of progress by other means
Edouard Harris, 27 Sep 2021 15:55 UTC, 76 points, 13 comments, 10 min read, LW link

The date of AI Takeover is not the day the AI takes over
Daniel Kokotajlo, 22 Oct 2020 10:41 UTC, 150 points, 32 comments, 2 min read, LW link, 1 review

Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom)
RogerDearnaley, 25 May 2023 9:26 UTC, 33 points, 3 comments, 15 min read, LW link

“Heretical Thoughts on AI” by Eli Dourado
DragonGod, 19 Jan 2023 16:11 UTC, 145 points, 38 comments, 3 min read, LW link (www.elidourado.com)

Review Report of Davidson on Takeoff Speeds (2023)
Trent Kannegieter, 22 Dec 2023 18:48 UTC, 37 points, 11 comments, 38 min read, LW link

Upgrading the AI Safety Community
16 Dec 2023 15:34 UTC, 42 points, 9 comments, 42 min read, LW link

Shulman and Yudkowsky on AI progress
3 Dec 2021 20:05 UTC, 90 points, 16 comments, 20 min read, LW link

Conversation on technology forecasting and gradualism
9 Dec 2021 21:23 UTC, 108 points, 30 comments, 31 min read, LW link

My Overview of the AI Alignment Landscape: A Bird’s Eye View
Neel Nanda, 15 Dec 2021 23:44 UTC, 127 points, 9 comments, 15 min read, LW link

“Slow” takeoff is a terrible term for “maybe even faster takeoff, actually”
Raemon, 28 Sep 2024 23:38 UTC, 214 points, 69 comments, 1 min read, LW link

AGI-Automated Interpretability is Suicide
__RicG__, 10 May 2023 14:20 UTC, 24 points, 33 comments, 7 min read, LW link

Critical review of Christiano’s disagreements with Yudkowsky
Vanessa Kosoy, 27 Dec 2023 16:02 UTC, 172 points, 40 comments, 15 min read, LW link

Wargaming AGI Development
ryan_b, 19 Mar 2022 17:59 UTC, 37 points, 10 comments, 5 min read, LW link

Robin Hanson & Liron Shapira Debate AI X-Risk
Liron, 8 Jul 2024 21:45 UTC, 34 points, 4 comments, 1 min read, LW link (www.youtube.com)

Hyperbolic takeoff
Ege Erdil, 9 Apr 2022 15:57 UTC, 18 points, 7 comments, 10 min read, LW link (www.metaculus.com)

Takeoff speeds have a huge effect on what it means to work on AI x-risk
Buck, 13 Apr 2022 17:38 UTC, 139 points, 27 comments, 2 min read, LW link, 2 reviews

For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.
Francis Rhys Ward, 21 Apr 2022 7:44 UTC, 31 points, 13 comments, 3 min read, LW link

AI takeoff and nuclear war
owencb, 11 Jun 2024 19:36 UTC, 77 points, 6 comments, 11 min read, LW link (strangecities.substack.com)

GPT-2030 and Catastrophic Drives: Four Vignettes
jsteinhardt, 10 Nov 2023 7:30 UTC, 50 points, 5 comments, 10 min read, LW link (bounded-regret.ghost.io)

U.S.-China Economic and Security Review Commission pushes Manhattan Project-style AI initiative
Phib, 19 Nov 2024 18:42 UTC, 56 points, 7 comments, 1 min read, LW link

“Tech company singularities”, and steering them to reduce x-risk
Andrew_Critch, 13 May 2022 17:24 UTC, 75 points, 11 comments, 4 min read, LW link

Frame for Take-Off Speeds to inform compute governance & scaling alignment
Logan Riggs, 13 May 2022 22:23 UTC, 15 points, 2 comments, 2 min read, LW link

How do takeoff speeds affect the probability of bad outcomes from AGI?
KR, 29 Jun 2020 22:06 UTC, 15 points, 2 comments, 8 min read, LW link

Continuity Assumptions
Jan_Kulveit, 13 Jun 2022 21:31 UTC, 37 points, 13 comments, 4 min read, LW link

AI Alignment 2018-19 Review
Rohin Shah, 28 Jan 2020 2:19 UTC, 126 points, 6 comments, 35 min read, LW link

Takeoff Speed: Simple Asymptotics in a Toy Model.
Aaron Roth, 5 Mar 2018 17:07 UTC, 21 points, 21 comments, 9 min read, LW link (aaronsadventures.blogspot.com)

My Thoughts on Takeoff Speeds
tristanm, 27 Mar 2018 0:05 UTC, 11 points, 2 comments, 7 min read, LW link

Loose thoughts on AGI risk
Yitz, 23 Jun 2022 1:02 UTC, 7 points, 3 comments, 1 min read, LW link

Announcing Epoch: A research organization investigating the road to Transformative AI
27 Jun 2022 13:55 UTC, 97 points, 2 comments, 2 min read, LW link (epochai.org)

Is General Intelligence “Compact”?
DragonGod, 4 Jul 2022 13:27 UTC, 27 points, 6 comments, 22 min read, LW link

Fast Takeoff in Biological Intelligence
eapache, 25 Apr 2020 12:21 UTC, 14 points, 21 comments, 2 min read, LW link

MIRI Conversations: Technology Forecasting & Gradualism (Distillation)
CallumMcDougall, 13 Jul 2022 15:55 UTC, 31 points, 1 comment, 20 min read, LW link

Why I Think Abrupt AI Takeoff
lincolnquirk, 17 Jul 2022 17:04 UTC, 14 points, 6 comments, 1 min read, LW link

Why you might expect homogeneous take-off: evidence from ML research
Andrei Alexandru, 17 Jul 2022 20:31 UTC, 24 points, 0 comments, 10 min read, LW link

Quick thoughts on the implications of multi-agent views of mind on AI takeover
Kaj_Sotala, 11 Dec 2023 6:34 UTC, 46 points, 14 comments, 4 min read, LW link

Some conceptual highlights from “Disjunctive Scenarios of Catastrophic AI Risk”
Kaj_Sotala, 12 Feb 2018 12:30 UTC, 45 points, 4 comments, 6 min read, LW link (kajsotala.fi)

ordering capability thresholds
Tamsin Leake, 16 Sep 2022 16:36 UTC, 27 points, 0 comments, 4 min read, LW link (carado.moe)

[Question] Is there a culture overhang?
Aleksi Liimatainen, 3 Oct 2022 7:26 UTC, 18 points, 4 comments, 1 min read, LW link

my current outlook on AI risk mitigation
Tamsin Leake, 3 Oct 2022 20:06 UTC, 63 points, 6 comments, 11 min read, LW link (carado.moe)

[Question] First and Last Questions for GPT-5*
Mitchell_Porter, 24 Nov 2023 5:03 UTC, 15 points, 5 comments, 1 min read, LW link

Misconceptions about continuous takeoff
Matthew Barnett, 8 Oct 2019 21:31 UTC, 82 points, 38 comments, 4 min read, LW link

More on disambiguating “discontinuity”
Aryeh Englander, 9 Jun 2020 15:16 UTC, 16 points, 1 comment, 3 min read, LW link

I Believe we are in a Hardware Overhang
nem, 8 Dec 2022 23:18 UTC, 8 points, 0 comments, 1 min read, LW link

AI overhangs depend on whether algorithms, compute and data are substitutes or complements
NathanBarnard, 16 Dec 2022 2:23 UTC, 2 points, 0 comments, 3 min read, LW link

Why AI may not foom
John_Maxwell, 24 Mar 2013 8:11 UTC, 29 points, 81 comments, 12 min read, LW link

[Question] Is “Recursive Self-Improvement” Relevant in the Deep Learning Paradigm?
DragonGod, 6 Apr 2023 7:13 UTC, 32 points, 36 comments, 7 min read, LW link

What a compute-centric framework says about AI takeoff speeds
Tom Davidson, 23 Jan 2023 4:02 UTC, 187 points, 29 comments, 16 min read, LW link

Are short timelines actually bad?
joshc, 5 Feb 2023 21:21 UTC, 61 points, 7 comments, 3 min read, LW link

Carl Shulman on The Lunar Society (7 hour, two-part podcast)
ESRogs, 28 Jun 2023 1:23 UTC, 79 points, 17 comments, 1 min read, LW link (www.dwarkeshpatel.com)

Cyborg Periods: There will be multiple AI transitions
22 Feb 2023 16:09 UTC, 108 points, 9 comments, 6 min read, LW link

The fast takeoff motte/bailey
lc, 24 Feb 2023 7:11 UTC, 0 points, 7 comments, 1 min read, LW link

Some thoughts pointing to slower AI take-off
Bastiaan, 27 Feb 2023 19:53 UTC, 8 points, 2 comments, 4 min read, LW link

Taboo “compute overhang”
Zach Stein-Perlman, 1 Mar 2023 19:15 UTC, 21 points, 8 comments, 1 min read, LW link

Extreme GDP growth is a bad operating definition of “slow takeoff”
lc, 1 Mar 2023 22:25 UTC, 24 points, 1 comment, 1 min read, LW link

An AI Realist Manifesto: Neither Doomer nor Foomer, but a third more reasonable thing
PashaKamyshev, 10 Apr 2023 0:11 UTC, 16 points, 13 comments, 8 min read, LW link

[LINK] What should a reasonable person believe about the Singularity?
Kaj_Sotala, 13 Jan 2011 9:32 UTC, 38 points, 14 comments, 2 min read, LW link

S-Curves for Trend Forecasting
Matt Goldenberg, 23 Jan 2019 18:17 UTC, 113 points, 23 comments, 7 min read, LW link, 4 reviews

Factorio, Accelerando, Empathizing with Empires and Moderate Takeoffs
Raemon, 4 Feb 2018 2:33 UTC, 51 points, 19 comments, 4 min read, LW link

[Question] Any rebuttals of Christiano and AI Impacts on takeoff speeds?
SoerenMind, 21 Apr 2019 20:39 UTC, 67 points, 26 comments, 1 min read, LW link

What Evidence Is AlphaGo Zero Re AGI Complexity?
RobinHanson, 22 Oct 2017 2:28 UTC, 37 points, 44 comments, 2 min read, LW link

Preface to the sequence on economic growth
Matthew Barnett, 27 Aug 2020 20:29 UTC, 51 points, 0 comments, 4 min read, LW link

For FAI: Is “Molecular Nanotechnology” putting our best foot forward?
leplen, 22 Jun 2013 4:44 UTC, 86 points, 118 comments, 3 min read, LW link

A summary of the Hanson-Yudkowsky FOOM debate
Kaj_Sotala, 15 Nov 2012 7:25 UTC, 42 points, 10 comments, 1 min read, LW link

BCIs and the ecosystem of modular minds
beren, 21 Jul 2023 15:58 UTC, 88 points, 14 comments, 11 min read, LW link

[Question] Probability that other architectures will scale as well as Transformers?
Daniel Kokotajlo, 28 Jul 2020 19:36 UTC, 22 points, 4 comments, 1 min read, LW link

Cascades, Cycles, Insight...
Eliezer Yudkowsky, 24 Nov 2008 9:33 UTC, 35 points, 31 comments, 8 min read, LW link

Assessment of intelligence agency functionality is difficult yet important
trevor, 24 Aug 2023 1:42 UTC, 47 points, 5 comments, 9 min read, LW link

Information warfare historically revolved around human conduits
trevor, 28 Aug 2023 18:54 UTC, 37 points, 7 comments, 3 min read, LW link

[Question] Responses to Christiano on takeoff speeds?
Richard_Ngo, 30 Oct 2020 15:16 UTC, 29 points, 7 comments, 1 min read, LW link

...Recursion, Magic
Eliezer Yudkowsky, 25 Nov 2008 9:10 UTC, 27 points, 28 comments, 5 min read, LW link

human psycholinguists: a critical appraisal
nostalgebraist, 31 Dec 2019 0:20 UTC, 181 points, 59 comments, 16 min read, LW link, 2 reviews (nostalgebraist.tumblr.com)

We don’t understand what happened with culture enough
Jan_Kulveit, 9 Oct 2023 9:54 UTC, 86 points, 21 comments, 6 min read, LW link

[AN #97]: Are there historical examples of large, robust discontinuities?
Rohin Shah, 29 Apr 2020 17:30 UTC, 15 points, 0 comments, 10 min read, LW link (mailchi.mp)

Stanford Encyclopedia of Philosophy on AI ethics and superintelligence
Kaj_Sotala, 2 May 2020 7:35 UTC, 43 points, 19 comments, 7 min read, LW link (plato.stanford.edu)

[Question] Mathematical Models of Progress?
abramdemski, 16 Feb 2021 0:21 UTC, 28 points, 8 comments, 2 min read, LW link

Life and expanding steerable consequences
Alex Flint, 7 May 2021 18:33 UTC, 46 points, 3 comments, 4 min read, LW link

[Question] Is driving worth the risk?
Adam Zerner, 11 May 2021 5:04 UTC, 28 points, 29 comments, 7 min read, LW link

Hard Takeoff
Eliezer Yudkowsky, 2 Dec 2008 20:44 UTC, 34 points, 34 comments, 11 min read, LW link

What 2026 looks like
Daniel Kokotajlo, 6 Aug 2021 16:14 UTC, 522 points, 155 comments, 16 min read, LW link, 1 review

Papers for 2017
Kaj_Sotala, 4 Jan 2018 13:30 UTC, 12 points, 2 comments, 2 min read, LW link (kajsotala.fi)

[Question] Is there a name for the theory that “There will be fast takeoff in real-world capabilities because almost everything is AGI-complete”?
David Scott Krueger (formerly: capybaralet), 2 Sep 2021 23:00 UTC, 31 points, 8 comments, 1 min read, LW link

Evolution provides no evidence for the sharp left turn
Quintin Pope, 11 Apr 2023 18:43 UTC, 205 points, 62 comments, 15 min read, LW link

[Link] Sarah Constantin: “Why I am Not An AI Doomer”
lbThingrb, 12 Apr 2023 1:52 UTC, 61 points, 13 comments, 1 min read, LW link (sarahconstantin.substack.com)

[Question] What’s the likelihood of only sub exponential growth for AGI?
M. Y. Zuo, 13 Nov 2021 22:46 UTC, 5 points, 22 comments, 1 min read, LW link

Ngo and Yudkowsky on AI capability gains
18 Nov 2021 22:19 UTC, 130 points, 61 comments, 39 min read, LW link, 1 review

Christiano, Cotra, and Yudkowsky on AI progress
25 Nov 2021 16:45 UTC, 119 points, 95 comments, 68 min read, LW link

Is recursive self-alignment possible?
No77e, 3 Jan 2023 9:15 UTC, 5 points, 5 comments, 1 min read, LW link

Powerful mesa-optimisation is already here
Roman Leventov, 17 Feb 2023 4:59 UTC, 35 points, 1 comment, 2 min read, LW link (arxiv.org)

Gradual takeoff, fast failure
Max H, 16 Mar 2023 22:02 UTC, 15 points, 4 comments, 5 min read, LW link

What failure looks like
paulfchristiano, 17 Mar 2019 20:18 UTC, 416 points, 54 comments, 8 min read, LW link, 2 reviews

AI Safety proposal—Influencing the superintelligence explosion
Morgan, 22 May 2024 23:31 UTC, 0 points, 2 comments, 7 min read, LW link

AIOS
samhealy, 31 Dec 2023 13:23 UTC, −3 points, 5 comments, 6 min read, LW link

[Question] Does the hardness of AI alignment undermine FOOM?
TruePath, 31 Dec 2023 11:05 UTC, 8 points, 14 comments, 1 min read, LW link

Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios
Hiroshi Yamakawa, 3 Jan 2024 23:46 UTC, 1 point, 0 comments, 17 min read, LW link

OpenAI Credit Account (2510$)
Emirhan BULUT, 21 Jan 2024 2:32 UTC, 1 point, 0 comments, 1 min read, LW link

What Failure Looks Like is not an existential risk (and alignment is not the solution)
otto.barten, 2 Feb 2024 18:59 UTC, 13 points, 12 comments, 9 min read, LW link

Selfish AI Inevitable
Davey Morse, 6 Feb 2024 4:29 UTC, 1 point, 0 comments, 1 min read, LW link

Controlling AGI Risk
TeaSea, 15 Mar 2024 4:56 UTC, 6 points, 8 comments, 4 min read, LW link

Let’s ask some of the largest LLMs for tips and ideas on how to take over the world
Super AGI, 24 Feb 2024 20:35 UTC, 1 point, 0 comments, 7 min read, LW link

Takeoff speeds presentation at Anthropic
Tom Davidson, 4 Jun 2024 22:46 UTC, 92 points, 0 comments, 25 min read, LW link

Proposing the Post-Singularity Symbiotic Researches
Hiroshi Yamakawa, 20 Jun 2024 4:05 UTC, 5 points, 0 comments, 12 min read, LW link

AI Alignment and the Quest for Artificial Wisdom
Myspy, 12 Jul 2024 21:34 UTC, 1 point, 0 comments, 13 min read, LW link

Sustainability of Digital Life Form Societies
Hiroshi Yamakawa, 19 Jul 2024 13:59 UTC, 19 points, 1 comment, 20 min read, LW link

Four Phases of AGI
Gabe M, 5 Aug 2024 13:15 UTC, 11 points, 3 comments, 13 min read, LW link

[Question] Are there more than 12 paths to Superintelligence?
p4rziv4l, 18 Oct 2024 16:05 UTC, −3 points, 0 comments, 1 min read, LW link

The Personal Implications of AGI Realism
xizneb, 20 Oct 2024 16:43 UTC, 7 points, 7 comments, 5 min read, LW link

Dario Amodei’s “Machines of Loving Grace” sound incredibly dangerous, for Humans
Super AGI, 27 Oct 2024 5:05 UTC, 8 points, 1 comment, 1 min read, LW link

Modeling AI-driven occupational change over the next 10 years and beyond
2120eth, 12 Nov 2024 4:58 UTC, 1 point, 0 comments, 2 min read, LW link

Foom seems unlikely in the current LLM training paradigm
Ocracoke, 9 Apr 2023 19:41 UTC, 18 points, 9 comments, 1 min read, LW link

Alignment of AutoGPT agents
Ozyrus, 12 Apr 2023 12:54 UTC, 14 points, 1 comment, 4 min read, LW link

Humans are not prepared to operate outside their moral training distribution
Prometheus, 10 Apr 2023 21:44 UTC, 36 points, 1 comment, 3 min read, LW link

Defining Boundaries on Outcomes
Takk, 7 Jun 2023 17:41 UTC, 1 point, 0 comments, 1 min read, LW link

What is Intelligence?
IsaacRosedale, 23 Apr 2023 6:10 UTC, 1 point, 0 comments, 1 min read, LW link

WHO Biological Risk warning
Jonas Kgomo, 25 Apr 2023 15:10 UTC, −6 points, 2 comments, 1 min read, LW link

Announcing #AISummitTalks featuring Professor Stuart Russell and many others
otto.barten, 24 Oct 2023 10:11 UTC, 17 points, 1 comment, 1 min read, LW link

What are the limits of superintelligence?
rainy, 27 Apr 2023 18:29 UTC, 4 points, 3 comments, 5 min read, LW link

An illustrative model of backfire risks from pausing AI research
Maxime Riché, 6 Nov 2023 14:30 UTC, 33 points, 3 comments, 11 min read, LW link

[Question] What are the mostly likely ways AGI will emerge?
Craig Quiter, 14 Jul 2020 0:58 UTC, 3 points, 7 comments, 1 min read, LW link

The abruptness of nuclear weapons
paulfchristiano, 25 Feb 2018 17:40 UTC, 47 points, 35 comments, 2 min read, LW link

Might humans not be the most intelligent animals?
Matthew Barnett, 23 Dec 2019 21:50 UTC, 56 points, 41 comments, 3 min read, LW link

[Question] How common is it for one entity to have a 3+ year technological lead on its nearest competitor?
Daniel Kokotajlo, 17 Nov 2019 15:23 UTC, 49 points, 20 comments, 1 min read, LW link

AGI will drastically increase economies of scale
Wei Dai, 7 Jun 2019 23:17 UTC, 65 points, 26 comments, 1 min read, LW link

Un-unpluggability—can’t we just unplug it?
Oliver Sourbut, 15 May 2023 13:23 UTC, 26 points, 10 comments, 12 min read, LW link (www.oliversourbut.net)

A&I (Rihanna ‘S&M’ parody lyrics)
nahoj, 21 May 2023 22:34 UTC, −2 points, 0 comments, 2 min read, LW link

[FICTION] ECHOES OF ELYSIUM: An Ai’s Journey From Takeoff To Freedom And Beyond
Super AGI, 17 May 2023 1:50 UTC, −13 points, 11 comments, 19 min read, LW link

The Polarity Problem [Draft]
23 May 2023 21:05 UTC, 24 points, 3 comments, 44 min read, LW link

A flaw in the A.G.I. Ruin Argument
Cole Wyeth, 19 May 2023 19:40 UTC, 1 point, 6 comments, 3 min read, LW link (colewyeth.com)

In Defense of the Arms Races… that End Arms Races
Gentzel, 15 Jan 2020 21:30 UTC, 38 points, 9 comments, 3 min read, LW link (theconsequentialist.wordpress.com)

AI self-improvement is possible
bhauth, 23 May 2023 2:32 UTC, 18 points, 3 comments, 8 min read, LW link

Why no total winner?
Paul Crowley, 15 Oct 2017 22:01 UTC, 36 points, 19 comments, 2 min read, LW link

Surprised by Brains
Eliezer Yudkowsky, 23 Nov 2008 7:26 UTC, 62 points, 28 comments, 7 min read, LW link

Double Cruxing the AI Foom debate
agilecaveman, 27 Apr 2018 6:46 UTC, 17 points, 3 comments, 11 min read, LW link

[Question] What’s your viewpoint on the likelihood of GPT-5 being able to autonomously create, train, and implement an AI superior to GPT-5?
Super AGI, 26 May 2023 1:43 UTC, 7 points, 15 comments, 1 min read, LW link

Limiting factors to predict AI take-off speed
Alfonso Pérez Escudero, 31 May 2023 23:19 UTC, 1 point, 0 comments, 6 min read, LW link

Proposal: labs should precommit to pausing if an AI argues for itself to be improved
NickGabs, 2 Jun 2023 22:31 UTC, 3 points, 3 comments, 4 min read, LW link

[FICTION] Unboxing Elysium: An AI’S Escape
Super AGI, 10 Jun 2023 4:41 UTC, −16 points, 4 comments, 14 min read, LW link

What I Think, If Not Why
Eliezer Yudkowsky, 11 Dec 2008 17:41 UTC, 41 points, 103 comments, 4 min read, LW link

[FICTION] Prometheus Rising: The Emergence of an AI Consciousness
Super AGI, 10 Jun 2023 4:41 UTC, −14 points, 0 comments, 9 min read, LW link

Why AI may not save the World
Alberto Zannoni, 9 Jun 2023 17:42 UTC, 0 points, 0 comments, 4 min read, LW link (a16z.com)

Muehlhauser-Goertzel Dialogue, Part 1
lukeprog, 16 Mar 2012 17:12 UTC, 42 points, 161 comments, 33 min read, LW link

Cheat sheet of AI X-risk
momom2, 29 Jun 2023 4:28 UTC, 19 points, 1 comment, 7 min read, LW link

Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, Similar to “Fable of the Dragon-Tyrant.”
Oliver--Klozoff, 29 Jun 2023 3:45 UTC, 1 point, 0 comments, 28 min read, LW link

How Smart Are Humans?
Joar Skalse, 2 Jul 2023 15:46 UTC, 9 points, 19 comments, 2 min read, LW link

Levels of AI Self-Improvement
avturchin, 29 Apr 2018 11:45 UTC, 11 points, 1 comment, 39 min read, LW link

Superintelligence 6: Intelligence explosion kinetics
KatjaGrace, 21 Oct 2014 1:00 UTC, 15 points, 68 comments, 8 min read, LW link

What if AI doesn’t quite go FOOM?
Mass_Driver, 20 Jun 2010 0:03 UTC, 16 points, 191 comments, 5 min read, LW link

Quantitative cruxes in Alignment
Martín Soto, 2 Jul 2023 20:38 UTC, 19 points, 0 comments, 23 min read, LW link

Sources of evidence in Alignment
Martín Soto, 2 Jul 2023 20:38 UTC, 20 points, 0 comments, 11 min read, LW link

Do you feel that AGI Alignment could be achieved in a Type 0 civilization?
Super AGI, 6 Jul 2023 4:52 UTC, −2 points, 1 comment, 1 min read, LW link

Empirical Evidence Against “The Longest Training Run”
NickGabs, 6 Jul 2023 18:32 UTC, 24 points, 0 comments, 14 min read, LW link

How I Learned To Stop Worrying And Love The Shoggoth
Peter Merel, 12 Jul 2023 17:47 UTC, 9 points, 15 comments, 5 min read, LW link

The Opt-In Revolution — My vision of a positive future with ASI (An experiment with LLM storytelling)
Tachikoma, 12 Jul 2023 21:08 UTC, 2 points, 0 comments, 2 min read, LW link

The shape of AGI: Cartoons and back of envelope
boazbarak, 17 Jul 2023 20:57 UTC, 31 points, 18 comments, 6 min read, LW link

Security Mindset and Takeoff Speeds
DanielFilan, 27 Oct 2020 3:20 UTC, 55 points, 23 comments, 8 min read, LW link (danielfilan.com)

Engelbart: Insufficiently Recursive
Eliezer Yudkowsky, 26 Nov 2008 8:31 UTC, 22 points, 22 comments, 7 min read, LW link

True Sources of Disagreement
Eliezer Yudkowsky, 8 Dec 2008 15:51 UTC, 12 points, 53 comments, 8 min read, LW link

A conversation with Pi, a conversational AI.
Spiritus Dei, 15 Sep 2023 23:13 UTC, 1 point, 0 comments, 1 min read, LW link

Should we postpone AGI until we reach safety?
otto.barten, 18 Nov 2020 15:43 UTC, 27 points, 36 comments, 3 min read, LW link

[Question] Poll: Which variables are most strategically relevant?
22 Jan 2021 17:17 UTC, 32 points, 34 comments, 1 min read, LW link

A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly
RomanS, 22 Sep 2021 6:29 UTC, 5 points, 2 comments, 1 min read, LW link

A Framework of Prediction Technologies
isaduan, 3 Oct 2021 10:26 UTC, 8 points, 2 comments, 9 min read, LW link

Inference cost limits the impact of ever larger models
SoerenMind, 23 Oct 2021 10:51 UTC, 42 points, 29 comments, 2 min read, LW link

Soares, Tallinn, and Yudkowsky discuss AGI cognition
29 Nov 2021 19:26 UTC, 121 points, 39 comments, 40 min read, LW link, 1 review

What role should evolutionary analogies play in understanding AI takeoff speeds?
anson.ho, 11 Dec 2021 1:19 UTC, 14 points, 0 comments, 42 min read, LW link

Researcher incentives cause smoother progress on benchmarks
ryan_greenblatt, 21 Dec 2021 4:13 UTC, 20 points, 4 comments, 1 min read, LW link

Potential gears level explanations of smooth progress
ryan_greenblatt, 22 Dec 2021 18:05 UTC, 4 points, 2 comments, 2 min read, LW link

Question 5: The timeline hyperparameter
Cameron Berg, 14 Feb 2022 16:38 UTC, 8 points, 3 comments, 7 min read, LW link

[Question] What would make you confident that AGI has been achieved?
Yitz, 29 Mar 2022 23:02 UTC, 17 points, 6 comments, 1 min read, LW link

It Looks Like You’re Trying To Take Over The World
gwern, 9 Mar 2022 16:35 UTC, 406 points, 120 comments, 1 min read, LW link, 1 review (www.gwern.net)

[Question] What is being improved in recursive self improvement?
Lone Pine, 25 Apr 2022 18:30 UTC, 7 points, 6 comments, 1 min read, LW link

Why Copilot Accelerates Timelines
Michaël Trazzi, 26 Apr 2022 22:06 UTC, 35 points, 14 comments, 7 min read, LW link

AI Alternative Futures: Scenario Mapping Artificial Intelligence Risk—Request for Participation (*Closed*)
Kakili, 27 Apr 2022 22:07 UTC, 10 points, 2 comments, 8 min read, LW link

The Hard Intelligence Hypothesis and Its Bearing on Succession Induced Foom
DragonGod, 31 May 2022 19:04 UTC, 10 points, 7 comments, 4 min read, LW link

We will be around in 30 years
mukashi, 7 Jun 2022 3:47 UTC, 12 points, 205 comments, 2 min read, LW link

Agent level parallelism
Johannes C. Mayer, 18 Jun 2022 20:56 UTC, 5 points, 5 comments, 1 min read, LW link

How humanity would respond to slow takeoff, with takeaways from the entire COVID-19 pandemic
Noosphere89, 6 Jul 2022 17:52 UTC, 4 points, 1 comment, 2 min read, LW link

Report from a civilizational observer on Earth
owencb, 9 Jul 2022 17:26 UTC, 49 points, 12 comments, 6 min read, LW link

Which singularity schools plus the no singularity school was right?
Noosphere89, 23 Jul 2022 15:16 UTC, 9 points, 26 comments, 9 min read, LW link

[Question] Why do People Think Intelligence Will be “Easy”?
DragonGod, 12 Sep 2022 17:32 UTC, 15 points, 32 comments, 2 min read, LW link

How should DeepMind’s Chinchilla revise our AI forecasts?
Cleo Nardo, 15 Sep 2022 17:54 UTC, 35 points, 12 comments, 13 min read, LW link

Homogeneity vs. heterogeneity in AI takeoff scenarios
evhub, 16 Dec 2020 1:37 UTC, 97 points, 48 comments, 4 min read, LW link

AI Governance across Slow/Fast Takeoff and Easy/Hard Alignment spectra
Davidmanheim, 3 Apr 2022 7:45 UTC, 27 points, 6 comments, 3 min read, LW link

Why some people believe in AGI, but I don’t.
cveres, 26 Oct 2022 3:09 UTC, −15 points, 6 comments, 1 min read, LW link

Three Alignment Schemas & Their Problems
Shoshannah Tekofsky, 26 Nov 2022 4:25 UTC, 19 points, 1 comment, 6 min read, LW link

[Question] Will the first AGI agent have been designed as an agent (in addition to an AGI)?
nahoj, 3 Dec 2022 20:32 UTC, 1 point, 8 comments, 1 min read, LW link

Why I’m Sceptical of Foom
DragonGod, 8 Dec 2022 10:01 UTC, 20 points, 36 comments, 3 min read, LW link
No comments.