
Solomonoff Induction


Solomonoff induction is an inference system defined by Ray Solomonoff that will learn to correctly predict any computable sequence with only the absolute minimum amount of data. This system, in a certain sense, is the perfect universal prediction algorithm.

To summarize it very informally, Solomonoff induction works by:

- considering every computable hypothesis, represented as a program for a universal Turing machine, and weighting each one by its simplicity (a program of length $\ell$ gets weight $2^{-\ell}$);
- eliminating the hypotheses that are inconsistent with the data observed so far, and predicting according to the weighted mixture of the survivors.

By weighting hypotheses by simplicity, the system automatically incorporates a form of Occam's razor, which is why it has been playfully referred to as Solomonoff's lightsaber.
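Real Solomonoff induction is uncomputable, but the shape of that update can be shown with a toy sketch over a deliberately tiny hypothesis class. Everything concrete below is an illustrative assumption: the "programs" are just bit strings read as repeating patterns rather than programs for a genuine universal machine, and no care is taken to make the program set prefix-free or to normalize it properly, so this is a cartoon of the weight-and-eliminate step, not an implementation of the actual prior.

```python
from itertools import product

def toy_hypotheses(max_len):
    """Toy hypothesis class: each bit string b with 1 <= len(b) <= max_len stands for
    the infinite sequence obtained by repeating b forever. A real Solomonoff inductor
    would instead range over all programs for a universal prefix machine."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predict_next_bit(observed, max_len=8):
    """Weight each toy hypothesis by 2**(-length), drop those inconsistent with the
    observed prefix, and return the surviving weight on 'next bit is 1'."""
    weight_one = weight_total = 0.0
    for b in toy_hypotheses(max_len):
        # Unroll the repeating pattern far enough to cover one bit past the data.
        generated = b * (len(observed) // len(b) + 2)
        if not generated.startswith(observed):
            continue  # eliminated: inconsistent with the data seen so far
        w = 2.0 ** (-len(b))  # simplicity weight, as in the bullet above
        weight_total += w
        if generated[len(observed)] == "1":
            weight_one += w
    return weight_one / weight_total if weight_total else 0.5

print(predict_next_bit("010101"))  # ~0.04: almost all weight says the next bit is 0
```

On the input "010101", the simple pattern "01" carries most of the surviving weight, so the sketch assigns nearly all of its probability to the next bit being 0.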

Solomonoff induction gets off the ground with a solution to the "problem of the priors". Suppose that you stand before a universal prefix Turing machine $U$. You are interested in a certain finite output string $y_0$. In particular, you want to know the probability that $U$ will produce the output $y_0$ given a random input tape. This probability is the Solomonoff a priori probability of $y_0$.

More precisely, suppose that a particular infinite input string $x_0$ is about to be fed into $U$. However, you know nothing about $x_0$ other than that each term of the string is either 0 or 1. As far as your state of knowledge is concerned, the $i$th digit of $x_0$ is as likely to be 0 as it is to be 1, for all $i = 1, 2, \ldots$. You want to find the a priori probability $m(y_0)$ of the following proposition:

(*) If $U$ takes in $x_0$ as input, then $U$ will produce output $y_0$ and then halt.

Unfortunately, computing the exact value of $m(y_0)$ would require solving the halting problem, which is undecidable. Nonetheless, it is easy to derive an expression for $m(y_0)$. If $U$ halts on an infinite input string $x$, then $U$ must read only a finite initial segment of $x$, after which $U$ immediately halts. We call a finite string $p$ a self-delimiting program if and only if there exists an infinite input string $x$ beginning with $p$ such that $U$ halts on $x$ immediately after reading to the end of $p$. The set $\mathcal{P}$ of self-delimiting programs is the prefix code for $U$: no element of $\mathcal{P}$ is a proper prefix of another, since once $U$ halts at the end of $p$ it never reads any further bits. It is the determination of the elements of $\mathcal{P}$ that requires a solution to the halting problem.
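To make the prefix-code condition concrete: a set of finite strings is a prefix code exactly when no element is a proper prefix of another. The real $\mathcal{P}$ cannot be enumerated, but the property itself is easy to check on a made-up candidate set (a small illustrative sketch, not anything specific to Solomonoff induction):

```python
def is_prefix_free(strings):
    """True iff no string in the collection is a proper prefix of another.
    After sorting, any string that is a proper prefix of some element of the set
    sits immediately before one of its extensions, so checking neighbors suffices."""
    ordered = sorted(strings)
    return all(not longer.startswith(shorter)
               for shorter, longer in zip(ordered, ordered[1:]))

print(is_prefix_free({"0", "10", "110", "111"}))  # True: a valid prefix code
print(is_prefix_free({"0", "01", "110"}))         # False: "0" is a prefix of "01"
```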

Given $p \in \mathcal{P}$, we write "$\operatorname{prog}(x_0) = p$" to express the proposition that $x_0$ begins with $p$, and we write "$U(p) = y_0$" to express the proposition that $U$ produces output $y_0$, and then halts, when fed any input beginning with $p$. Proposition (*) is then equivalent to the exclusive disjunction


$$\bigvee_{p \in \mathcal{P} :\, U(p) = y_0} \big(\operatorname{prog}(x_0) = p\big).$$

Since $x_0$ was chosen at random from $\{0, 1\}^\omega$, each of the first $\ell(p)$ bits of $x_0$ independently matches the corresponding bit of $p$ with probability $1/2$, so we take the probability of $\operatorname{prog}(x_0) = p$ to be $2^{-\ell(p)}$, where $\ell(p)$ is the length of $p$ as a bit string. Hence, the probability of (*) is


$$m(y_0) := \sum_{p \in \mathcal{P} :\, U(p) = y_0} 2^{-\ell(p)}.$$
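As a concrete instance of the sum: if exactly two self-delimiting programs produce $y_0$, one of length 3 and one of length 5, then $m(y_0) = 2^{-3} + 2^{-5} = 5/32$. The sum cannot be evaluated exactly (deciding which strings belong to $\mathcal{P}$ and produce $y_0$ is as hard as the halting problem), but a crude lower bound can be sketched by brute force once some concrete interpreter is fixed to stand in for $U$. In the sketch below, the `run_bounded` helper, the length cap, and the step budget are all illustrative assumptions rather than part of the definition:

```python
from itertools import product

def lower_bound_m(y0, run_bounded, max_len=12, max_steps=10_000):
    """Crude lower bound on m(y0) for a fixed prefix machine U.

    `run_bounded(p, max_steps)` is an assumed helper: it should return U's output
    if U, fed an input beginning with p, halts within max_steps steps having read
    exactly p, and None otherwise (including the case where U would read past p).

    Summing 2**(-len(p)) over the candidates that verifiably produce y0 within the
    budget gives a lower bound on m(y0); programs longer than max_len or slower
    than max_steps are simply missed, and the true value is uncomputable."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if run_bounded(p, max_steps) == y0:
                total += 2.0 ** (-n)
    return total
```

Because every program missed by the caps would only add nonnegative weight, the returned value can only underestimate $m(y_0)$.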


References

An Intuitive Explanation of Solomonoff Induction
Alex_Altair · 11 Jul 2012 8:05 UTC · 100 points · 220 comments · 24 min read · LW link

The Solomonoff Prior is Malign
Mark Xu · 14 Oct 2020 1:33 UTC · 108 points · 25 comments · 16 min read · LW link

Open Problems Related to Solomonoff Induction
Wei_Dai · 6 Jun 2012 0:26 UTC · 27 points · 106 comments · 2 min read · LW link

[Question] How is Solomonoff induction calculated in practice?
Bucky · 4 Jun 2019 10:11 UTC · 36 points · 13 comments · 1 min read · LW link

When does rationality-as-search have nontrivial implications?
nostalgebraist · 4 Nov 2018 22:42 UTC · 67 points · 11 comments · 3 min read · LW link

Mathematical Inconsistency in Solomonoff Induction?
curi · 25 Aug 2020 17:09 UTC · 7 points · 15 comments · 2 min read · LW link

Solomonoff Cartesianism
Rob Bensinger · 2 Mar 2014 17:56 UTC · 34 points · 49 comments · 25 min read · LW link

[Question] Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?
habryka · 8 Dec 2018 1:49 UTC · 21 points · 13 comments · 1 min read · LW link

Clarifying Consequentialists in the Solomonoff Prior
vlad_m · 11 Jul 2018 2:35 UTC · 19 points · 14 comments · 6 min read · LW link

A potential problem with using Solomonoff induction as a prior
JoshuaZ · 7 Apr 2011 19:27 UTC · 13 points · 18 comments · 1 min read · LW link

Clarifying The Malignity of the Universal Prior: The Lexical Update
interstice · 15 Jan 2020 0:00 UTC · 21 points · 2 comments · 3 min read · LW link

Reflective AIXI and Anthropics
Diffractor · 24 Sep 2018 2:15 UTC · 19 points · 13 comments · 8 min read · LW link

Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach
Scott Garrabrant · 22 Jul 2015 19:27 UTC · 11 points · 0 comments · 1 min read · LW link

Belief in the Implied Invisible
Eliezer Yudkowsky · 8 Apr 2008 7:40 UTC · 37 points · 33 comments · 6 min read · LW link

Weak arguments against the universal prior being malign
X4vier · 14 Jun 2018 17:11 UTC · 52 points · 23 comments · 3 min read · LW link

Decoherence is Simple
Eliezer Yudkowsky · 6 May 2008 7:44 UTC · 49 points · 61 comments · 11 min read · LW link

This Territory Does Not Exist
ike · 13 Aug 2020 0:30 UTC · 11 points · 199 comments · 7 min read · LW link

The prior of a hypothesis does not depend on its complexity
cousin_it · 26 Aug 2010 13:20 UTC · 26 points · 69 comments · 1 min read · LW link

[Question] Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?
curi · 4 Sep 2020 19:46 UTC · 2 points · 15 comments · 1 min read · LW link