Intelligence explosion

Last edit: 19 Feb 2025 21:54 UTC by RobertM

An “intelligence explosion” is what happens if a machine intelligence has fast, consistent returns on investing work into improving its own cognitive powers, over an extended period. This would most stereotypically happen because it became able to optimize its own cognitive software, but could also apply in the case of “invested cognitive power in seizing all the computing power on the Internet” or “invested cognitive power in cracking the protein folding problem and then built nanocomputers”.

A strong version of this idea suggests that once the positive feedback starts to play a role, it will lead to a very dramatic leap in capability very quickly. This is known as a “hard takeoff.” In this scenario, technological progress drops into the characteristic timescale of transistors rather than human neurons, and the ascent rapidly surges upward and creates superintelligence (a mind orders of magnitude more powerful than a human’s) before it hits physical limits. A hard takeoff is distinguished from a “soft takeoff” only by the speed with which said limits are reached.
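The feedback loop described above can be sketched with a toy growth model. This is an illustration only, not a claim about real AI dynamics: the growth law, the constants, and the capability cap are all arbitrary assumptions chosen to show how the shape of returns on cognitive reinvestment separates a hard takeoff from a soft one.

```python
# Toy model (illustrative assumptions only): capability I grows as
# dI/dt = k * I**alpha, i.e. the rate of self-improvement scales with
# current capability.
#   alpha > 1  -> superlinear returns: finite-time blowup ("hard takeoff")
#   alpha == 1 -> exponential growth
#   alpha < 1  -> diminishing returns: slow polynomial growth ("soft takeoff")

def steps_to_cap(alpha, k=0.1, i0=1.0, dt=0.01, cap=1e6, max_steps=200_000):
    """Euler-integrate dI/dt = k*I**alpha; return steps until I >= cap,
    or None if the cap is not reached within the simulated horizon."""
    i = i0
    for step in range(max_steps):
        if i >= cap:
            return step
        i += k * i**alpha * dt
    return None

fast = steps_to_cap(alpha=1.5)  # superlinear returns
slow = steps_to_cap(alpha=0.5)  # sublinear returns
assert fast is not None          # hits the cap quickly
assert slow is None              # still far from the cap at the horizon
```

In the continuous model with alpha > 1, the solution actually diverges in finite time, which is the mathematical caricature of "a very dramatic leap in capability very quickly"; with alpha < 1 the same feedback loop yields only polynomial growth.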

Published arguments

Philosopher David Chalmers published a significant analysis of the Singularity, focusing on intelligence explosions, in the Journal of Consciousness Studies. He carefully examined the main premises and arguments for a singularity arising from an intelligence explosion, and his analysis defends the likelihood of such an explosion occurring. According to him, the main argument is:

1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
—————-
4. There will be AI++ (before long, absent defeaters).

He also discusses the nature of general intelligence, and possible obstacles to a singularity. A good deal of discussion is given to the dangers of an intelligence explosion, and Chalmers concludes that we must negotiate it very carefully by building the correct values into the initial AIs.

Luke Muehlhauser and Anna Salamon argue in detail in Intelligence Explosion: Evidence and Import that there is a substantial chance of an intelligence explosion within 100 years, and that such an event would be extremely important in determining the future. They trace the implications of many upcoming technologies and point out the feedback loops present in them, which leads them to conclude that an above-human-level AI would almost certainly lead to an intelligence explosion. They conclude with recommendations for bringing about a safe intelligence explosion.

Hypothetical path

The following is a common example of a possible path by which an AI could bring about an intelligence explosion. First, the AI is smart enough to conclude that inventing molecular nanotechnology will be of the greatest benefit to it. Its first act of recursive self-improvement is to gain access to other computers over the internet. This extra computational ability increases the depth and breadth of its search processes. It then uses its gained knowledge of materials physics and a distributed computing program to invent the first general assembler nanomachine. Next, it uses some manufacturing technology, accessible over the internet, to build and deploy the nanotech. It programs the nanotech to turn a large section of bedrock into a supercomputer. This is its second act of recursive self-improvement, possible only because of the first. It could then use this enormous computing power to consider hundreds of alternative decision algorithms, better computing structures, and so on. After this, the AI would go from near-human-level intelligence to superintelligence, a dramatic and abrupt increase in capability.

Blog posts


Why I’m Sceptical of Foom
DragonGod · 8 Dec 2022 10:01 UTC · 20 points · 36 comments · 3 min read · LW link

Towards a Formalisation of Returns on Cognitive Reinvestment (Part 1)
DragonGod · 4 Jun 2022 18:42 UTC · 17 points · 11 comments · 13 min read · LW link

The Hard Intelligence Hypothesis and Its Bearing on Succession Induced Foom
DragonGod · 31 May 2022 19:04 UTC · 10 points · 7 comments · 4 min read · LW link

1960: The Year The Singularity Was Cancelled
Scott Alexander · 23 Apr 2019 1:30 UTC · 111 points · 15 comments · 11 min read · LW link · 1 review · (slatestarcodex.com)

New report: Intelligence Explosion Microeconomics
Eliezer Yudkowsky · 29 Apr 2013 23:14 UTC · 74 points · 246 comments · 3 min read · LW link

Superintelligence 6: Intelligence explosion kinetics
KatjaGrace · 21 Oct 2014 1:00 UTC · 15 points · 68 comments · 8 min read · LW link

Optimization and the Intelligence Explosion
Eliezer Yudkowsky · 11 Mar 2015 19:00 UTC · 69 points · 2 comments · 7 min read · LW link

Intelligence Explosion analysis draft: From digital intelligence to intelligence explosion
lukeprog · 26 Nov 2011 6:30 UTC · 1 point · 5 comments · 6 min read · LW link

Will AI R&D Automation Cause a Software Intelligence Explosion?
26 Mar 2025 18:12 UTC · 19 points · 3 comments · 2 min read · LW link · (www.forethought.org)

Facing the Intelligence Explosion discussion page
lukeprog · 26 Nov 2011 8:05 UTC · 22 points · 138 comments · 1 min read · LW link

AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them
Roman Leventov · 27 Dec 2023 14:51 UTC · 33 points · 9 comments · 4 min read · LW link

Intelligence explosion in organizations, or why I’m not worried about the singularity
sbenthall · 27 Dec 2012 4:32 UTC · 13 points · 187 comments · 3 min read · LW link

Knowledge, Reasoning, and Superintelligence
owencb · 26 Mar 2025 23:28 UTC · 21 points · 1 comment · 7 min read · LW link · (strangecities.substack.com)

The Evolution of Humans Was Net-Negative for Human Values
Zack_M_Davis · 1 Apr 2024 16:01 UTC · 38 points · 1 comment · 2 min read · LW link

Facing the Intelligence Explosion
mgin · 29 Jul 2014 2:16 UTC · 6 points · 0 comments · 1 min read · LW link

Better than logarithmic returns to reasoning?
Oliver Sourbut · 30 Jul 2025 0:50 UTC · 14 points · 5 comments · 3 min read · LW link · (www.oliversourbut.net)

IntelligenceExplosion.com
lukeprog · 7 Aug 2011 17:46 UTC · 19 points · 23 comments · 1 min read · LW link

Existential risk from AI without an intelligence explosion
AlexMennen · 25 May 2017 16:44 UTC · 20 points · 23 comments · 3 min read · LW link

[Question] Does the AI control agenda broadly rely on no FOOM being possible?
Noosphere89 · 29 Mar 2025 19:38 UTC · 22 points · 3 comments · 1 min read · LW link

Toward an overview analysis of intelligence explosion
lukeprog · 13 Nov 2011 22:23 UTC · 7 points · 15 comments · 1 min read · LW link

Could Democritus have predicted intelligence explosion?
lukeprog · 24 Jan 2012 8:40 UTC · 7 points · 56 comments · 1 min read · LW link

Intelligence Explosion vs. Co-operative Explosion
Kaj_Sotala · 16 Apr 2012 11:01 UTC · 34 points · 62 comments · 16 min read · LW link

Carl Shulman on The Lunar Society (7 hour, two-part podcast)
ESRogs · 28 Jun 2023 1:23 UTC · 79 points · 17 comments · 1 min read · LW link · (www.dwarkeshpatel.com)

Why AGI May not Significantly Accelerate Chip Manufacturing
John Coleman · 11 Feb 2026 22:04 UTC · 1 point · 0 comments · 10 min read · LW link

A method for empirical back-testing of AI’s ability to self-improve
Michael Tontchev · 21 Mar 2023 20:24 UTC · 3 points · 0 comments · 2 min read · LW link

How Smart Are Humans?
Joar Skalse · 2 Jul 2023 15:46 UTC · 10 points · 19 comments · 2 min read · LW link

Interview with Robert Kralisch on Simulators
WillPetillo · 26 Aug 2024 5:49 UTC · 17 points · 0 comments · 75 min read · LW link

Carl Shulman On Dwarkesh Podcast June 2023
Moonicker · 11 Feb 2024 21:02 UTC · 18 points · 0 comments · 159 min read · LW link

What are the differences between a singularity, an intelligence explosion, and a hard takeoff?
3 Apr 2025 10:37 UTC · 5 points · 0 comments · 2 min read · LW link · (aisafety.info)

Creating AGI Safety Interlocks
Koen.Holtman · 5 Feb 2021 12:01 UTC · 7 points · 4 comments · 8 min read · LW link

Intelligence explosion: a rational assessment.
p4rziv4l · 30 Sep 2024 21:17 UTC · 1 point · 0 comments · 1 min read · LW link · (docs.google.com)

Counterfactual Planning in AGI Systems
Koen.Holtman · 3 Feb 2021 13:54 UTC · 10 points · 0 comments · 5 min read · LW link

The biological intelligence explosion
Rob Lucas · 25 Jul 2021 13:08 UTC · 8 points · 5 comments · 4 min read · LW link

Why Recursive Self-Improvement Might Not Be the Existential Risk We Fear
Nassim_A · 24 Nov 2024 17:17 UTC · 1 point · 0 comments · 9 min read · LW link

Bits per Joule: A Thermodynamic Framework for Intelligence Efficiency (and What It Means for Scaling Ceilings and FOOM)
Koichi Takahashi · 6 Feb 2026 2:56 UTC · 1 point · 0 comments · 9 min read · LW link

What is Intelligence?
IsaacRosedale · 23 Apr 2023 6:10 UTC · 1 point · 0 comments · 1 min read · LW link

Human, All Too Human—Superintelligence requires learning things we can’t teach
Ben Turtel · 26 Dec 2024 16:26 UTC · −13 points · 4 comments · 1 min read · LW link · (bturtel.substack.com)

Exploring a Vision for AI as Compassionate, Emotionally Intelligent Partners — Seeking Collaboration and Insights
theophilos · 14 Jul 2025 23:22 UTC · 1 point · 0 comments · 1 min read · LW link

A Simple Theory Of Consciousness
SherlockHolmes · 8 Aug 2023 18:05 UTC · 2 points · 5 comments · 1 min read · LW link · (peterholmes.medium.com)

Unaligned AGI & Brief History of Inequality
ank · 22 Feb 2025 16:26 UTC · −20 points · 4 comments · 7 min read · LW link

Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, Similar to “Fable of the Dragon-Tyrant.”
Oliver--Klozoff · 29 Jun 2023 3:45 UTC · 1 point · 0 comments · 28 min read · LW link

Singularity FAQ
lukeprog · 19 Apr 2011 17:27 UTC · 22 points · 35 comments · 1 min read · LW link

A basic mathematical structure of intelligence
Golol · 12 Apr 2023 16:49 UTC · 4 points · 6 comments · 4 min read · LW link

[Question] What is the nature of humans general intelligence and it’s implications for AGI?
Will_Pearson · 26 Mar 2024 15:20 UTC · 5 points · 4 comments · 1 min read · LW link

LLMs May Find It Hard to FOOM
RogerDearnaley · 15 Nov 2023 2:52 UTC · 13 points · 30 comments · 12 min read · LW link