# Solomonoff Induction

Last edit: 21 Apr 2022 23:14 UTC

Solomonoff induction is an inference system defined by Ray Solomonoff that will learn to correctly predict any computable sequence with only the absolute minimum amount of data. This system, in a certain sense, is the perfect universal prediction algorithm.

To summarize it very informally, Solomonoff induction works by:

• Starting with all possible hypotheses (sequences), represented by the computer programs that generate them, each weighted by its simplicity (2^−n, where n is the program's length in bits);

• Discarding those hypotheses that are inconsistent with the data.

Weighting hypotheses by simplicity, the system automatically incorporates a form of Occam’s razor, which is why it has been playfully referred to as Solomonoff’s lightsaber.
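The two steps above can be sketched in a few lines. The following is a minimal toy sketch, not real Solomonoff induction: enumerating all programs for a universal machine is uncomputable, so repeating binary patterns stand in for programs here, with pattern length playing the role of program length in the 2^−n prior.

```python
from fractions import Fraction

# Toy stand-in for Solomonoff induction. Hypotheses are periodic 0/1
# sequences; a pattern of length n gets prior weight 2^-n, echoing the
# 2^-n program-length prior. (An illustrative assumption, not the real
# hypothesis class, which ranges over all programs for a universal machine.)

def hypotheses(max_len):
    """Yield every repeating pattern up to max_len with its prior weight."""
    for n in range(1, max_len + 1):
        for bits in range(2 ** n):
            pattern = tuple((bits >> i) & 1 for i in range(n))
            yield pattern, Fraction(1, 2 ** n)

def predict_next(observed, max_len=8):
    """Posterior probability that the next bit is 1, after discarding
    every hypothesis inconsistent with the observed prefix."""
    p_one = total = Fraction(0)
    for pattern, w in hypotheses(max_len):
        # Keep only hypotheses that reproduce the data seen so far.
        if all(pattern[i % len(pattern)] == b for i, b in enumerate(observed)):
            total += w
            if pattern[len(observed) % len(pattern)] == 1:
                p_one += w
    return p_one / total

print(predict_next([1, 0, 1, 0, 1, 0]))  # 22/23: the shortest surviving
                                         # pattern (1,0) dominates
```

Note how the simplicity weighting does the work: longer patterns also fit the data, but their combined 2^−n weight is dwarfed by the shortest consistent one.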

Solomonoff induction gets off the ground with a solution to the “problem of the priors”. Suppose that you stand before a universal prefix Turing machine U. You are interested in a certain finite output string y0. In particular, you want to know the probability that U will produce the output y0 given a random input tape. This probability is the Solomonoff a priori probability of y0.

More precisely, suppose that a particular infinite input string x0 is about to be fed into U. However, you know nothing about x0 other than that each term of the string is either 0 or 1. As far as your state of knowledge is concerned, the ith digit of x0 is as likely to be 0 as it is to be 1, for all i = 1, 2, …. You want to find the a priori probability m(y0) of the following proposition:

(*) If U takes in x0 as input, then U will produce output y0 and then halt.

Unfortunately, computing the exact value of m(y0) would require solving the halting problem, which is undecidable. Nonetheless, it is easy to derive an expression for m(y0). If U halts on an infinite input string x, then U must read only a finite initial segment of x, after which U immediately halts. We call a finite string p a self-delimiting program if and only if there exists an infinite input string x beginning with p such that U halts on x immediately after reading to the end of p. The set 𝒫 of self-delimiting programs is the prefix code for U. It is the determination of the elements of 𝒫 that requires a solution to the halting problem.
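A small illustration of the prefix-free property may help; the code set below is hypothetical (the actual 𝒫 depends on U, and enumerating it would require solving the halting problem). Prefix-freeness is also what makes the 2^−ℓ(p) weights behave like probabilities: by the Kraft inequality they sum to at most 1.

```python
from fractions import Fraction

# Hypothetical toy code set standing in for the prefix code P of
# self-delimiting programs (the real P depends on the machine U).
programs = ["0", "10", "110", "1110"]

def is_prefix_free(codes):
    """No code may be a proper prefix of another: U can then halt the
    instant it finishes reading a program, with no end marker needed."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

# Kraft inequality: 2^-length weights of a prefix-free set sum to <= 1.
kraft_sum = sum(Fraction(1, 2 ** len(p)) for p in programs)

print(is_prefix_free(programs), kraft_sum)  # True 15/16
```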

Given p ∈ 𝒫, we write “prog(x0) = p” to express the proposition that x0 begins with p, and we write “U(p) = y0” to express the proposition that U produces output y0, and then halts, when fed any input beginning with p. Proposition (*) is then equivalent to the exclusive disjunction

⋁_{p ∈ 𝒫 : U(p) = y0} (prog(x0) = p).

Since x0 was chosen at random from {0, 1}^ω, we take the probability of prog(x0) = p to be 2^−ℓ(p), where ℓ(p) is the length of p as a bit string. Hence, the probability of (*) is

m(y0) := ∑_{p ∈ 𝒫 : U(p) = y0} 2^−ℓ(p).
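Given a table of U's outputs on 𝒫, the sum is mechanical. The table below is invented purely for illustration, since actually determining 𝒫 and the outputs U(p) would require solving the halting problem.

```python
from fractions import Fraction

# Invented toy outputs: maps each self-delimiting program p to U(p).
# (Pure illustration; no real universal machine is being simulated.)
U_outputs = {"0": "a", "10": "b", "110": "a", "1110": "c"}

def m(y):
    """Solomonoff a priori probability of output y: the sum of 2^-l(p)
    over all self-delimiting programs p with U(p) = y."""
    return sum(Fraction(1, 2 ** len(p))
               for p, out in U_outputs.items() if out == y)

print(m("a"))  # 1/2 + 1/8 = 5/8
```

Note that simple outputs (those producible by short programs) receive most of the mass, which is exactly the Occam-style bias described above.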

# The Solomonoff Prior is Malign

14 Oct 2020 1:33 UTC
168 points

# An Intuitive Explanation of Solomonoff Induction

11 Jul 2012 8:05 UTC
157 points

# A Semitechnical Introductory Dialogue on Solomonoff Induction

4 Mar 2021 17:27 UTC
139 points

# Open Problems Related to Solomonoff Induction

6 Jun 2012 0:26 UTC
43 points

# Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn’t require knowing Occam’s razor

18 Jun 2023 1:52 UTC
38 points

# When does rationality-as-search have nontrivial implications?

4 Nov 2018 22:42 UTC
66 points

# The Problem of the Criterion

21 Jan 2021 15:05 UTC
56 points

# [Question] How is Solomonoff induction calculated in practice?

4 Jun 2019 10:11 UTC
33 points

# Computational Model: Causal Diagrams with Symmetry

22 Aug 2019 17:54 UTC
53 points

# Solomonoff Cartesianism

2 Mar 2014 17:56 UTC
51 points

# [Question] Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?

8 Dec 2018 1:49 UTC
22 points

# Clarifying Consequentialists in the Solomonoff Prior

11 Jul 2018 2:35 UTC
20 points

# A potential problem with using Solomonoff induction as a prior

7 Apr 2011 19:27 UTC
18 points

# Clarifying The Malignity of the Universal Prior: The Lexical Update

15 Jan 2020 0:00 UTC
20 points

# K-complexity is silly; use cross-entropy instead

20 Dec 2022 23:06 UTC
136 points

# Reflective AIXI and Anthropics

24 Sep 2018 2:15 UTC
18 points

# Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach

22 Jul 2015 19:27 UTC
13 points

# My impression of singular learning theory

18 Jun 2023 15:34 UTC
45 points

# Mathematical Inconsistency in Solomonoff Induction?

25 Aug 2020 17:09 UTC
7 points

# Why you can’t treat decidability and complexity as a constant (Post #1)

26 Jul 2023 17:54 UTC
6 points

# Limited agents need approximate induction

24 Apr 2015 7:42 UTC
16 points

# Solomonoff Induction and Sleeping Beauty

17 Nov 2020 2:28 UTC
7 points

# Solomonoff Induction explained via dialog.

21 Sep 2017 5:27 UTC
3 points
(arbital.com)

# Occam’s Razor and the Universal Prior

3 Oct 2021 3:23 UTC
28 points

# What is the advantage of the Kolmogorov complexity prior?

16 Feb 2012 1:51 UTC
18 points

# Are the fundamental physical constants computable?

5 Apr 2022 15:05 UTC
15 points

# From the “weird math questions” department...

9 Aug 2012 7:19 UTC
7 points

# Pascal’s Mugging: Tiny Probabilities of Vast Utilities

19 Oct 2007 23:37 UTC
105 points

# Multiple Worlds, One Universal Wave Function

4 Nov 2020 22:28 UTC
53 points

# [Question] Questions about Solomonoff induction

10 Jan 2024 1:16 UTC
7 points

# An additional problem with Solomonoff induction

22 Jan 2014 23:34 UTC
3 points

# Remarks 1–18 on GPT (compressed)

20 Mar 2023 22:27 UTC
146 points

# Prediction can be Outer Aligned at Optimum

10 Jan 2021 18:48 UTC
15 points

# Excerpt from Arbital Solomonoff induction dialogue

17 Jan 2021 3:49 UTC
36 points
(arbital.com)

# A Brief Introduction to ACI, 2: An Event-Centric View

12 Apr 2023 3:23 UTC
1 point

# Response to “What does the universal prior actually look like?”

20 May 2021 16:12 UTC
36 points

# Does Solomonoff always win?

23 Feb 2011 20:42 UTC
14 points

# The Solomonoff prior is malign. It’s not a big deal.

25 Aug 2022 8:25 UTC
41 points

# Summary of the Acausal Attack Issue for AIXI

13 Dec 2021 8:16 UTC
12 points

# [Question] Generalization of the Solomonoff Induction to Accuracy—Is it possible? Would it be useful?

20 Feb 2022 19:29 UTC
2 points

# Commensurable Scientific Paradigms; or, computable induction

13 Apr 2022 0:01 UTC
14 points

# Prosaic misalignment from the Solomonoff Predictor

9 Dec 2022 17:53 UTC
40 points

# all claw, no world — and other thoughts on the universal distribution

14 Dec 2022 18:55 UTC
15 points

# Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence

30 Dec 2022 19:05 UTC
10 points

# The Ethics of ACI

16 Feb 2023 23:51 UTC
−8 points

# Intuitive Explanation of Solomonoff Induction

1 Dec 2011 6:56 UTC
14 points

# Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning

21 Feb 2023 9:05 UTC
10 points

# Solomonoff’s solipsism

8 May 2023 6:55 UTC
−13 points

# Belief in the Implied Invisible

8 Apr 2008 7:40 UTC
57 points

# (A Failed Approach) From Precedent to Utility Function

29 Apr 2023 21:55 UTC
0 points

# ACI #3: The Origin of Goals and Utility

17 May 2023 20:47 UTC
1 point

# Weak arguments against the universal prior being malign

14 Jun 2018 17:11 UTC
50 points

# Decoherence is Simple

6 May 2008 7:44 UTC
67 points

# This Territory Does Not Exist

13 Aug 2020 0:30 UTC
7 points

# Solomonoff Induction, by Shane Legg

21 Feb 2011 0:32 UTC
21 points

# How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.

11 Jul 2023 19:27 UTC
17 points

# The prior of a hypothesis does not depend on its complexity

26 Aug 2010 13:20 UTC
34 points

# [Question] Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?

4 Sep 2020 19:46 UTC
2 points

# Approximating Solomonoff Induction

29 May 2015 12:23 UTC
13 points