You need to understand Solomonoff’s and Hutter’s ideas first to see where Legg is coming from. One of the best introductions to these topics available online is Legg’s “Solomonoff Induction”, though Li and Vitanyi’s book is more thorough if you can get it. Legg’s paper about prediction is very nice. I haven’t studied his other papers but they’re probably nice too. He comes across as a smart and cautious researcher who doesn’t make technical mistakes. His thesis seems to be a compilation of his previous papers, so maybe you’re better off just reading them.
The thesis is quite readable and I found it valuable to sink deeply into the paradigm, rather than have things spread out over a bunch of papers.
The most worthless part of the thesis, IIRC*, was his discussion and collection of definitions of intelligence; it doesn’t help persuade anyone of the intelligence = sequence-prediction claim, and just takes up space.
* It’s been a while; I’ve forgotten whether the thesis actually covers this or whether I’m thinking of another paper.
I’ve got Li and Vitanyi’s book and am currently working through the Algorithmic Probability Theory sequence they suggest. I am also working through Legg’s Solomonoff Induction paper.
Earlier today I actually commented on your thread from February, mentioning this paper, which seems to deal with the issues related to semi-measures in detail (something you were indicating was very important), and it seems to do so in the context of the quote from Eliezer.
In particular, from the abstract:
Universal semimeasures work by modelling the sequence as generated by an unknown program running on a universal computer. Although these predictors are uncomputable, and so cannot be implemented in practice, they serve to describe an ideal: an existence proof for systems that predict better than humans.
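The idea in the quoted abstract can be illustrated with a toy sketch (my own, not from the paper): replace the universal computer with a trivial interpreter that cycles a bit-string “program” forever, weight each program by 2^-length as in the Solomonoff prior, and predict the next bit by summing the weights of all programs consistent with the observed prefix. This isn’t a real universal machine, so it only conveys the flavor of the construction:

```python
from itertools import product

def run(program, n):
    # Toy interpreter: output the program's bits cycled out to length n.
    return [program[i % len(program)] for i in range(n)]

def predict(observed, max_len=10):
    # Solomonoff-style mixture over toy programs up to max_len bits:
    # each program of length L gets prior weight 2^-L; sum the weights
    # of programs whose output matches the observed prefix, split by
    # what they output next, then normalize.
    weights = {0: 0.0, 1: 0.0}
    for L in range(1, max_len + 1):
        for program in product([0, 1], repeat=L):
            out = run(program, len(observed) + 1)
            if out[:-1] == observed:
                weights[out[-1]] += 2.0 ** -L
    total = weights[0] + weights[1]
    return weights[0] / total, weights[1] / total

p0, p1 = predict([0, 1, 0, 1, 0, 1])
# Short (simple) programs like (0, 1) dominate the mixture, so the
# predictor strongly expects the alternating pattern to continue with 0.
```

The uncomputability the abstract mentions shows up here as the cutoff `max_len`: the real predictor sums over all programs on a genuinely universal machine, which no finite procedure can do.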
Yes, I already rederived most of these results and even made a tiny little bit of progress on the fringe :-) But it turned out to be tangential to the problem I’m trying to solve.