
Solomonoff induction

Last edit: 19 Feb 2025 22:33 UTC by RobertM

Solomonoff induction is an ideal answer to questions like “What probably comes next in the sequence 1, 1, 2, 3, 5, 8?” or “Given the last three years of visual data from this webcam, what will this robot probably see next?” or “Will the sun rise tomorrow?” Solomonoff induction requires infinite computing power. It is defined by taking every computable algorithm that gives a probability distribution over future data given past data, weighting each algorithm by its algorithmic simplicity, and updating those weights by comparison with the actual data.

E.g., somewhere in the ideal Solomonoff distribution is an exact copy of you, right now, staring at a string of 1s and 0s and trying to predict what comes next—though this copy of you starts out with a very low weight in the mixture owing to its complexity. Since a copy of you is present in this mixture of computable predictors, we can prove a theorem about how well Solomonoff induction does compared to an exact copy of you; namely, Solomonoff induction commits only a bounded amount of error relative to you, or any other computable way of making predictions. Solomonoff induction is thus a kind of perfect or rational ideal for probabilistically predicting sequences, although it cannot be implemented in reality due to requiring infinite computing power. Still, considering Solomonoff induction can give us important insights into how non-ideal reasoning should operate in the real world.
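To make the “weighted mixture of computable predictors” idea concrete, here is a deliberately tiny sketch in Python. It stands in for the real (uncomputable) construction by using just four hand-written hypotheses over binary sequences; the hypothesis names and their “description lengths” are invented for illustration.

```python
# Toy sketch of Solomonoff-style prediction: a simplicity-weighted Bayesian
# mixture over a tiny, hand-picked hypothesis class. (The real construction
# sums over ALL computable predictors, which is why it is uncomputable.)

# Each hypothesis: (assumed description length in bits, P(next bit = 1 | history)).
# Both the class and the lengths are made up for this example.
HYPOTHESES = {
    "fair_coin":   (1, lambda hist: 0.5),
    "always_one":  (2, lambda hist: 0.99),
    "always_zero": (2, lambda hist: 0.01),
    "alternate":   (3, lambda hist: 0.99 if (not hist or hist[-1] == 0) else 0.01),
}

def mixture_prediction(history):
    """P(next bit = 1) under the simplicity-weighted posterior mixture."""
    posterior = {}
    for name, (length, predict) in HYPOTHESES.items():
        weight = 2.0 ** -length                # simplicity prior: 2^-length
        for i, bit in enumerate(history):      # Bayes update on each observed bit
            p_one = predict(history[:i])
            weight *= p_one if bit == 1 else 1.0 - p_one
        posterior[name] = weight
    total = sum(posterior.values())
    # Mix each hypothesis's prediction, weighted by its posterior probability.
    return sum(w / total * HYPOTHESES[name][1](history)
               for name, w in posterior.items())

# After observing 1, 0, 1, 0, 1 the "alternate" hypothesis dominates the
# posterior, so the mixture assigns high probability to the next bit being 0.
print(mixture_prediction((1, 0, 1, 0, 1)))
```

Real Solomonoff induction follows exactly this pattern taken to the limit: every computable predictor enters the mixture with prior weight exponentially small in its program length, and hypotheses that mispredict the data have their weights driven toward zero.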

Additional reading:

The Solomonoff Prior is Malign

Mark Xu · 14 Oct 2020 1:33 UTC
180 points
53 comments · 16 min read · LW link · 3 reviews

An Intuitive Explanation of Solomonoff Induction

Alex_Altair · 11 Jul 2012 8:05 UTC
172 points
231 comments · 24 min read · LW link

A Semitechnical Introductory Dialogue on Solomonoff Induction

Eliezer Yudkowsky · 4 Mar 2021 17:27 UTC
146 points
32 comments · 54 min read · LW link

Open Problems Related to Solomonoff Induction

Wei Dai · 6 Jun 2012 0:26 UTC
56 points
105 comments · 2 min read · LW link

A Technical Introduction to Solomonoff Induction without K-Complexity

Leon Lang · 26 Nov 2025 21:36 UTC
76 points
20 comments · 25 min read · LW link

Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn’t require knowing Occam’s razor

Christopher King · 18 Jun 2023 1:52 UTC
39 points
28 comments · 4 min read · LW link

The Problem of the Criterion

Gordon Seidoh Worley · 21 Jan 2021 15:05 UTC
57 points
63 comments · 10 min read · LW link

When does rationality-as-search have nontrivial implications?

nostalgebraist · 4 Nov 2018 22:42 UTC
72 points
12 comments · 3 min read · LW link

[Question] How is Solomonoff induction calculated in practice?

Bucky · 4 Jun 2019 10:11 UTC
33 points
13 comments · 1 min read · LW link

Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach

Scott Garrabrant · 22 Jul 2015 19:27 UTC
13 points
0 comments · 1 min read · LW link

A potential problem with using Solomonoff induction as a prior

JoshuaZ · 7 Apr 2011 19:27 UTC
18 points
18 comments · 1 min read · LW link

Pascal’s Mugging: Tiny Probabilities of Vast Utilities

Eliezer Yudkowsky · 19 Oct 2007 23:37 UTC
119 points
355 comments · 4 min read · LW link

Reflective AIXI and Anthropics

Diffractor · 24 Sep 2018 2:15 UTC
18 points
14 comments · 8 min read · LW link

Experimental Evidence for Simulator Theory — Part 2: The Scalers Strike Back

RogerDearnaley · 23 Mar 2026 22:37 UTC
21 points
0 comments · 34 min read · LW link

Clarifying The Malignity of the Universal Prior: The Lexical Update

interstice · 15 Jan 2020 0:00 UTC
20 points
2 comments · 3 min read · LW link

Rethinking Laplace’s Rule of Succession

Cleo Nardo · 22 Nov 2024 18:46 UTC
13 points
5 comments · 2 min read · LW link

Clarifying Consequentialists in the Solomonoff Prior

Vlad Mikulik · 11 Jul 2018 2:35 UTC
20 points
16 comments · 6 min read · LW link

Solomonoff Cartesianism

Rob Bensinger · 2 Mar 2014 17:56 UTC
52 points
51 comments · 25 min read · LW link

Occam’s Razor and the Universal Prior

Peter Chatain · 3 Oct 2021 3:23 UTC
29 points
5 comments · 21 min read · LW link

K-complexity is silly; use cross-entropy instead

So8res · 20 Dec 2022 23:06 UTC
153 points
60 comments · 14 min read · LW link · 2 reviews

Lambda Calculus Prior

abramdemski · 14 Nov 2025 21:29 UTC
25 points
3 comments · 4 min read · LW link

An additional problem with Solomonoff induction

gedymin · 22 Jan 2014 23:34 UTC
3 points
51 comments · 4 min read · LW link

Why you can’t treat decidability and complexity as a constant (Post #1)

Noosphere89 · 26 Jul 2023 17:54 UTC
6 points
13 comments · 5 min read · LW link

[Question] Questions about Solomonoff induction

mukashi · 10 Jan 2024 1:16 UTC
7 points
11 comments · 1 min read · LW link

Deep Learning is cheap Solomonoff induction?

7 Dec 2024 11:00 UTC
46 points
1 comment · 17 min read · LW link

Changing my mind about Christiano’s malign prior argument

Cole Wyeth · 4 Apr 2025 0:54 UTC
37 points
34 comments · 7 min read · LW link

Experimental Evidence for Simulator Theory — Part 1: Emergent Misalignment and Weird Generalizations

RogerDearnaley · 23 Mar 2026 22:37 UTC
25 points
0 comments · 53 min read · LW link

Multiple Worlds, One Universal Wave Function

evhub · 4 Nov 2020 22:28 UTC
61 points
76 comments · 61 min read · LW link

Are the fundamental physical constants computable?

Yair Halberstadt · 5 Apr 2022 15:05 UTC
15 points
6 comments · 2 min read · LW link

Sleeping Experts in the (reflective) Solomonoff Prior

31 Aug 2025 4:55 UTC
16 points
0 comments · 3 min read · LW link

An N=1 observational study on interpretability of Natural General Intelligence (NGI)

dr_s · 27 Sep 2025 9:28 UTC
12 points
3 comments · 6 min read · LW link

Mathematical Inconsistency in Solomonoff Induction?

Elliot Temple · 25 Aug 2020 17:09 UTC
7 points
15 comments · 2 min read · LW link

What is the advantage of the Kolmogorov complexity prior?

skepsci · 16 Feb 2012 1:51 UTC
18 points
29 comments · 2 min read · LW link

From SLT to AIT: NN generalisation out-of-distribution

Lucius Bushnaq · 4 Sep 2025 15:20 UTC
114 points
8 comments · 14 min read · LW link

Remarks 1–18 on GPT (compressed)

Cleo Nardo · 20 Mar 2023 22:27 UTC
147 points
35 comments · 31 min read · LW link

Solomonoff Induction explained via dialog.

panickedapricott · 21 Sep 2017 5:27 UTC
3 points
0 comments · 1 min read · LW link
(arbital.com)

My impression of singular learning theory

Ege Erdil · 18 Jun 2023 15:34 UTC
52 points
30 comments · 2 min read · LW link

From the “weird math questions” department...

CronoDAS · 9 Aug 2012 7:19 UTC
7 points
50 comments · 1 min read · LW link

Computational Model: Causal Diagrams with Symmetry

johnswentworth · 22 Aug 2019 17:54 UTC
53 points
31 comments · 4 min read · LW link

Proof idea: SLT to AIT

Lucius Bushnaq · 10 Feb 2025 23:14 UTC
42 points
15 comments · 6 min read · LW link

[Question] Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?

habryka · 8 Dec 2018 1:49 UTC
22 points
13 comments · 1 min read · LW link

Solomonoff Induction and Sleeping Beauty

ike · 17 Nov 2020 2:28 UTC
7 points
0 comments · 2 min read · LW link

Limited agents need approximate induction

Manfred · 24 Apr 2015 7:42 UTC
16 points
10 comments · 8 min read · LW link

An attempt to break circularity in science

bilibili · 15 Jul 2022 18:32 UTC
3 points
5 comments · 1 min read · LW link

Help me understand: how do multiverse acausal trades work?

Aram Ebtekar · 1 Sep 2025 3:25 UTC
46 points
26 comments · 2 min read · LW link

Prediction can be Outer Aligned at Optimum

Lukas Finnveden · 10 Jan 2021 18:48 UTC
15 points
12 comments · 11 min read · LW link

How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.

Christopher King · 11 Jul 2023 19:27 UTC
17 points
11 comments · 2 min read · LW link

Solomonoff’s solipsism

Mergimio H. Doefevmil · 8 May 2023 6:55 UTC
−13 points
9 comments · 1 min read · LW link

[Question] Generalization of the Solomonoff Induction to Accuracy — Is it possible? Would it be useful?

PeterL · 20 Feb 2022 19:29 UTC
2 points
1 comment · 1 min read · LW link

The Example

Valerii K. · 19 Jan 2026 15:27 UTC
10 points
0 comments · 10 min read · LW link

Intuitive Explanation of Solomonoff Induction

lukeprog · 1 Dec 2011 6:56 UTC
14 points
31 comments · 10 min read · LW link

This Territory Does Not Exist

ike · 13 Aug 2020 0:30 UTC
7 points
197 comments · 7 min read · LW link

(A Failed Approach) From Precedent to Utility Function

Akira Pyinya · 29 Apr 2023 21:55 UTC
0 points
2 comments · 4 min read · LW link

A Brief Introduction to ACI, 2: An Event-Centric View

Akira Pyinya · 12 Apr 2023 3:23 UTC
3 points
0 comments · 2 min read · LW link

Excerpt from Arbital Solomonoff induction dialogue

Richard_Ngo · 17 Jan 2021 3:49 UTC
36 points
6 comments · 5 min read · LW link
(arbital.com)

The Solomonoff prior is malign. It’s not a big deal.

Charlie Steiner · 25 Aug 2022 8:25 UTC
43 points
9 comments · 7 min read · LW link

“The Solomonoff Prior is Malign” is a special case of a simpler argument

David Matolcsi · 17 Nov 2024 21:32 UTC
131 points
46 comments · 12 min read · LW link

Response to “What does the universal prior actually look like?”

michaelcohen · 20 May 2021 16:12 UTC
37 points
33 comments · 18 min read · LW link

ACI#9: What is Intelligence

Akira Pyinya · 9 Dec 2024 21:54 UTC
3 points
0 comments · 8 min read · LW link

Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence

Akira Pyinya · 30 Dec 2022 19:05 UTC
10 points
4 comments · 14 min read · LW link

Prosaic misalignment from the Solomonoff Predictor

Cleo Nardo · 9 Dec 2022 17:53 UTC
43 points
3 comments · 5 min read · LW link

Belief in the Implied Invisible

Eliezer Yudkowsky · 8 Apr 2008 7:40 UTC
69 points
35 comments · 6 min read · LW link

Decoherence is Simple

Eliezer Yudkowsky · 6 May 2008 7:44 UTC
78 points
63 comments · 11 min read · LW link

Summary of the Acausal Attack Issue for AIXI

Diffractor · 13 Dec 2021 8:16 UTC
12 points
6 comments · 4 min read · LW link

Approximating Solomonoff Induction

Houshalter · 29 May 2015 12:23 UTC
13 points
45 comments · 3 min read · LW link

[Question] Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?

Elliot Temple · 4 Sep 2020 19:46 UTC
2 points
15 comments · 1 min read · LW link

Occam’s Razor

Eliezer Yudkowsky · 26 Sep 2007 6:36 UTC
159 points
55 comments · 5 min read · LW link

The optimizer won’t just guess your intended semantics

Thomas Kehrenberg · 6 Mar 2025 19:42 UTC
20 points
1 comment · 6 min read · LW link

Weak arguments against the universal prior being malign

X4vier · 14 Jun 2018 17:11 UTC
50 points
23 comments · 3 min read · LW link

Towards building blocks of ontologies

8 Feb 2025 16:03 UTC
29 points
0 comments · 26 min read · LW link

Loss Curves

James Camacho · 6 May 2025 22:22 UTC
16 points
3 comments · 4 min read · LW link
(github.com)

The Ethics of ACI

Akira Pyinya · 16 Feb 2023 23:51 UTC
−8 points
0 comments · 3 min read · LW link

Solomonoff Induction, by Shane Legg

cousin_it · 21 Feb 2011 0:32 UTC
21 points
8 comments · 1 min read · LW link

What program structures enable efficient induction?

Daniel C · 5 Sep 2024 10:12 UTC
23 points
5 comments · 3 min read · LW link

Commensurable Scientific Paradigms; or, computable induction

samshap · 13 Apr 2022 0:01 UTC
14 points
0 comments · 5 min read · LW link

A Brief Introduction to Algorithmic Common Intelligence, ACI . 1

Akira Pyinya · 5 Apr 2023 5:43 UTC
−2 points
1 comment · 2 min read · LW link

The power of finite and the weakness of infinite binary point numbers

AxiomWriter · 20 Apr 2024 6:03 UTC
−3 points
6 comments · 2 min read · LW link

Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning

Roger Dearnaley · 21 Feb 2023 9:05 UTC
10 points
1 comment · 23 min read · LW link

ACI #3: The Origin of Goals and Utility

Akira Pyinya · 17 May 2023 20:47 UTC
1 point
0 comments · 6 min read · LW link

AIT Lecture Notes: A learning journey

Itisan Halias · 19 Jan 2026 0:28 UTC
1 point
0 comments · 1 min read · LW link

Does Solomonoff always win?

cousin_it · 23 Feb 2011 20:42 UTC
14 points
56 comments · 2 min read · LW link

The prior of a hypothesis does not depend on its complexity

cousin_it · 26 Aug 2010 13:20 UTC
34 points
69 comments · 1 min read · LW link