The material is fundamental. It doesn’t seem to mention the wirehead problem, though.
Download it from here: http://www.vetta.org/publications/
SIAI gets a pretty favourable write-up. He did get their grant, though. That’s one of the expenditures I approve of.
Looking at Schmidhuber and Hutter should help as well. Some links in the area.
Ideally, start off with Solomonoff induction. My links on that are at the bottom of this page.
I appreciate the links, but I’ve already reviewed or am in the process of reviewing most of them. I was looking for something dealing more specifically with the thesis itself, as I’m not sure whether I should read it, or if perhaps I should only read certain parts of it to save time.
Also (more than a bit off topic), I’ve enjoyed a number of your youtube videos in the past. A friend of mine has developed a fascination with keyboards and was quite pleased when he saw your video displaying your keyboard array.
Reading the thesis is probably throwing yourself in at the deep end, which is not necessarily the best way to learn. It depends a lot on what you have already studied so far, though.
You might be correct on that. For now I suppose I should focus on mastering the basics. I’ve nearly finished Legg’s write-up of Solomonoff Induction, but since there seems to be a good bit of controversy over the AIXI approach, I’ll get a few more of the details of algorithmic probability theory under my belt and then move on to something more obviously useful for a while, like the details of machine learning and vision, and maybe the ideas for category-theoretic ontologies.
Again, I would point at Solomonoff Induction as being the really key idea, with AIXI being icing and complication to some extent.
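For anyone reading along who hasn’t met it yet, the key idea can be stated compactly. Glossing over the technicalities about monotone machines and semimeasures, Solomonoff’s universal prior weights every computable hypothesis by the lengths of the programs that produce it:

```latex
% Solomonoff's universal prior over binary strings x:
% sum over all programs p that cause a universal (monotone)
% machine U to output something beginning with x,
% where \ell(p) is the length of program p in bits.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Sequence prediction then falls out by ordinary conditioning:
M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t} x_{t+1})}{M(x_{1:t})}
```

AIXI is then, roughly, expectimax planning with M substituted for a known environment model, which is where most of the extra complication (and the controversy) comes in.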
The whole area seems under-explored to me, with plenty of potential low-hanging fruit. That is pretty strange, considering how important the area is. Maybe people are put off by all the dense maths.
On the other hand, general systems tend to suffer from jack-of-all-trades syndrome. So: you may have to explore a little and then decide where your talents are best used.
This seems to be the biggest issue for me: my tendency is to pick up as much as I can easily digest and move on (though I do read portions of the relevant texts and articles, and often work out a few problems when the material calls for it). I generally return to certain material to delve deeper once it becomes clear that doing so would be useful.
Most of my knowledge base right now is in mathematical logic (some recursive function theory, theory of computation, computational complexity, a smattering of category theory), some of the more discrete areas of mathematics (mainly algebraic structures, computational algebraic geometry) and analytic philosophy (philosophical logic, philosophy of mathematics, philosophy of science).
Over the past several (4-5) months I’ve been working off and on through the material suggested on LessWrong: the Sequences, decision theory, Bayesian inference, evolutionary psychology, cognitive psychology, cognitive science, etc. I’ve only gotten serious about tackling these and other areas related to FAI over the past couple of months (and very serious over the past few weeks).
Still, nothing seems to pop out at me as ‘best suited for me’.
OK. Keep going—or take a break. Good luck!
Thanks!
I’ll keep it up for as long as possible. I tend to become quite obsessed with even ordinary subjects that catch my attention, so that should be all the more true of FAI, since I take Unfriendly AGI seriously as an existential threat and FAI seriously as a major benefit to humanity.