AIXI

Marcus Hutter’s AIXI is the perfect rolling sphere of advanced agent theory—it’s not realistic, but you can’t understand more complicated scenarios if you can’t envision the rolling sphere. At the core of AIXI is Solomonoff induction, a way of using infinite computing power to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with prior probabilities weighted by their algorithmic simplicity, and updating their probabilities based on how well they match observation. We then translate the agent problem into a sequence of percepts, actions, and rewards, so we can use sequence prediction. AIXI is roughly the agent that considers all computable hypotheses to explain the so-far-observed relation of sensory data and actions to rewards, and then searches for the best strategy to maximize future rewards. To a first approximation, AIXI could figure out every ordinary problem that any human being or intergalactic civilization could solve. If AIXI actually existed, it wouldn’t be a god; it’d be something that could tear apart a god like tinfoil.

Further information:

An Intuitive Explanation of Solomonoff Induction
Alex_Altair · 11 Jul 2012 8:05 UTC · 171 points · 230 comments · 24 min read · LW link

Failures of an embodied AIXI
So8res · 15 Jun 2014 18:29 UTC · 50 points · 47 comments · 12 min read · LW link

Approximately Bayesian Reasoning: Knightian Uncertainty, Goodhart, and the Look-Elsewhere Effect
RogerDearnaley · 26 Jan 2024 3:58 UTC · 16 points · 2 comments · 11 min read · LW link

Intuitive Explanation of AIXI
Thomas Larsen · 12 Jun 2022 21:41 UTC · 22 points · 2 comments · 5 min read · LW link

The Problem with AIXI
Rob Bensinger · 18 Mar 2014 1:55 UTC · 44 points · 80 comments · 23 min read · LW link

mAIry’s room: AI reasoning to solve philosophical problems
Stuart_Armstrong · 5 Mar 2019 20:24 UTC · 87 points · 41 comments · 6 min read · LW link · 2 reviews

Reflective AIXI and Anthropics
Diffractor · 24 Sep 2018 2:15 UTC · 18 points · 14 comments · 8 min read · LW link

A utility-maximizing varient of AIXI
AlexMennen · 17 Dec 2012 3:48 UTC · 26 points · 22 comments · 5 min read · LW link

New intro textbook on AIXI
Alex_Altair · 11 May 2024 18:18 UTC · 50 points · 8 comments · 1 min read · LW link

Can AIXI be trained to do anything a human can?
Stuart_Armstrong · 20 Oct 2014 13:12 UTC · 5 points · 9 comments · 2 min read · LW link

Corrigibility for AIXI via double indifference
Stuart_Armstrong · 4 May 2016 14:00 UTC · 0 points · 0 comments · 4 min read · LW link

Failures of UDT-AIXI, Part 1: Improper Randomizing
Diffractor · 6 Jan 2019 3:53 UTC · 14 points · 3 comments · 4 min read · LW link

Potential Alignment mental tool: Keeping track of the types
Donald Hobson · 22 Nov 2021 20:05 UTC · 29 points · 1 comment · 2 min read · LW link

[video] Paul Christiano’s impromptu tutorial on AIXI and TDT
lukeprog · 19 Mar 2012 17:20 UTC · 12 points · 13 comments · 1 min read · LW link

Rebuttals for ~all criticisms of AIXI
Cole Wyeth · 7 Jan 2025 17:41 UTC · 26 points · 17 comments · 14 min read · LW link

Launching new AIXI research community website + reading group(s)
Cole Wyeth · 13 Aug 2025 17:09 UTC · 46 points · 2 comments · 1 min read · LW link

AIXI and Existential Despair
paulfchristiano · 8 Dec 2011 20:03 UTC · 24 points · 39 comments · 6 min read · LW link

Mathematics for AIXI and Gödel machine
Faustus2 · 22 Jul 2015 18:52 UTC · 1 point · 6 comments · 1 min read · LW link

The “best predictor is malicious optimiser” problem
Donald Hobson · 29 Jul 2020 11:49 UTC · 14 points · 10 comments · 2 min read · LW link

Help request: What is the Kolmogorov complexity of computable approximations to AIXI?
AnnaSalamon · 5 Dec 2010 10:23 UTC · 9 points · 9 comments · 1 min read · LW link

Save the princess: A tale of AIXI and utility functions
Anja · 1 Feb 2013 15:38 UTC · 24 points · 11 comments · 6 min read · LW link

Occam’s Razor and the Universal Prior
Peter Chatain · 3 Oct 2021 3:23 UTC · 29 points · 5 comments · 21 min read · LW link

“AIXIjs: A Software Demo for General Reinforcement Learning”, Aslanides 2017
gwern · 29 May 2017 21:09 UTC · 7 points · 1 comment · 1 min read · LW link (arxiv.org)

Versions of AIXI can be arbitrarily stupid
Stuart_Armstrong · 10 Aug 2015 13:23 UTC · 30 points · 59 comments · 1 min read · LW link

LW is to rationality as AIXI is to intelligence
XiXiDu · 6 Mar 2011 20:24 UTC · 3 points · 46 comments · 4 min read · LW link

How to make AIXI-tl incapable of learning
itaibn0 · 27 Jan 2014 0:05 UTC · 7 points · 5 comments · 2 min read · LW link

Why you can’t treat decidability and complexity as a constant (Post #1)
Noosphere89 · 26 Jul 2023 17:54 UTC · 6 points · 13 comments · 5 min read · LW link

Program Search and Incomplete Understanding
Diffractor · 29 Apr 2018 4:32 UTC · 39 points · 1 comment · 4 min read · LW link

AIXI-style IQ tests
gwern · 29 Jan 2011 0:49 UTC · 14 points · 8 comments · 1 min read · LW link

Would AIXI protect itself?
Stuart_Armstrong · 9 Dec 2011 12:29 UTC · 15 points · 23 comments · 3 min read · LW link

Interview with Vanessa Kosoy on the Value of Theoretical Research for AI
WillPetillo · 4 Dec 2023 22:58 UTC · 37 points · 0 comments · 35 min read · LW link

Simulating Synthetic Consciousness: Identity, Memory, Free Will, and Culture in Artificial Agents
ARamfos · 30 Apr 2025 21:13 UTC · 1 point · 0 comments · 1 min read · LW link

Solomonoff Cartesianism
Rob Bensinger · 2 Mar 2014 17:56 UTC · 51 points · 51 comments · 25 min read · LW link

Universal agents and utility functions
Anja · 14 Nov 2012 4:05 UTC · 43 points · 38 comments · 6 min read · LW link

Open Problems in AIXI Agent Foundations
Cole Wyeth · 12 Sep 2024 15:38 UTC · 42 points · 2 comments · 10 min read · LW link

Free Will and Dodging Anvils: AIXI Off-Policy
Cole Wyeth · 29 Aug 2024 22:42 UTC · 39 points · 12 comments · 9 min read · LW link

Summary of the Acausal Attack Issue for AIXI
Diffractor · 13 Dec 2021 8:16 UTC · 12 points · 6 comments · 4 min read · LW link

Proposal: Using Monte Carlo tree search instead of RLHF for alignment research
Christopher King · 20 Apr 2023 19:57 UTC · 2 points · 7 comments · 3 min read · LW link

Hutter-Prize for Prompts
rokosbasilisk · 24 Mar 2023 21:26 UTC · 5 points · 10 comments · 1 min read · LW link

Universal AI Maximizes Variational Empowerment: New Insights into AGI Safety
Yusuke Hayashi · 27 Feb 2025 0:46 UTC · 13 points · 1 comment · 4 min read · LW link

Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence
Akira Pyinya · 30 Dec 2022 19:05 UTC · 10 points · 4 comments · 14 min read · LW link

Meta Programming GPT: A route to Superintelligence?
dmtea · 11 Jul 2020 14:51 UTC · 10 points · 7 comments · 4 min read · LW link