Do people still put stock in AIXI? I’m considering whether it’s worthwhile for me to invest time learning about Solomonoff induction etc. Currently leaning towards “no” or “aggressively 80/20 to get a few probably-correct high-level takeaways”.
Edit: Maybe a better question is: has AIXI substantially informed your worldview, and do you think it conveys useful ideas and formalisms about AI?
I find it valuable to know about AIXI specifically and algorithmic information theory generally. That doesn’t mean it is useful for you, however.
If you are not interested in math and mathematical approaches to alignment, I would guess the value of AIXI to you is low.
An exception is that knowing about AIXI can inoculate one against the wrong but very common intuitions that (i) AGI is about capabilities, (ii) AGI doesn’t exist, (iii) RL is outdated, (iv) pure scaling of next-token prediction will lead to AGI, and (v) there are lots of ways to create AGI and the use of RL is a design choice [no, silly].
The talk about Kolmogorov complexity and uncomputable priors is a bit of a distraction from the overall point: there is an actual True Name of General Intelligence, an artificial “Universal Intelligence”, where “universal” must be read with a large number of asterisks.
One can understand this point without understanding the details of AIXI; I think the two are mostly distinct, though knowing the details could help.
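(For readers who want to see exactly what is being set aside here, a minimal sketch in the standard Solomonoff/Hutter notation, which is not spelled out in this thread: the Solomonoff prior weights hypotheses by program length on a monotone universal Turing machine $U$,

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

where the sum is over minimal programs $p$ whose output begins with the observed string $x$, and $\ell(p)$ is the length of $p$ in bits. $M$ is only lower-semicomputable, which is why these priors get called uncomputable.)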
Defining, describing, mathematizing, and conceptualizing intelligence is an ongoing research programme. AIXI (and its many variants, like AIXI-tl) is a very idealized and simplistic model of general intelligence, but it’s a foothold for the eventual understanding that will emerge.
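For reference, the one-line formalism (this is Hutter’s standard definition, not something derived in this thread): AIXI picks actions to maximize expected future reward under the length-weighted mixture over all computable environments,

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_t + \cdots + r_m \right) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where the $a_k$, $o_k$, $r_k$ are actions, observations, and rewards, $m$ is the horizon, and the inner sum weights each environment program $q$ consistent with the interaction history by $2^{-\ell(q)}$. AIXI-tl, mentioned above, is the variant that restricts the search to programs of length at most $l$ and per-step runtime at most $t$, which makes it computable though still wildly intractable.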
knowing about AIXI can inoculate one against the wrong but very common intuitions that (i) AGI is about capabilities, (ii) AGI doesn’t exist, (iii) RL is outdated, (iv) pure scaling of next-token prediction will lead to AGI, and (v) there are lots of ways to create AGI and the use of RL is a design choice [no, silly].
I found this particularly insightful! Thanks for sharing.
Based on this, I’ll probably do a low-effort skim of LessWrong’s AIXI sequence and see what I find.