The thesis is probably throwing yourself in at the deep end, which is not necessarily the best way to learn. It depends a lot on what you have already studied so far, though.
You might be correct on that. For now I suppose I should focus on mastering the basics. I’ve nearly finished Legg’s write-up of Solomonoff Induction, but since there seems to be a good deal of controversy over the AIXI approach, I’ll get a few more of the details of algorithmic probability theory under my belt and then move on to something more obviously useful for a while, like the details of machine learning and vision, and maybe the ideas for category-theoretic ontologies.
Again, I would point at Solomonoff Induction as being the really key idea, with AIXI being icing and complication to some extent.
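To make the key idea concrete, here is a toy sketch of the Solomonoff prior. This is only an illustration under strong simplifying assumptions: real Solomonoff induction ranges over all programs and is uncomputable, so this sketch restricts "programs" to periodic bit-patterns. Each pattern consistent with the observed data gets prior weight 2^-length, so shorter explanations dominate the prediction:

```python
# Toy sketch of the Solomonoff prior (NOT a real implementation:
# true Solomonoff induction is uncomputable). Hypotheses here are
# periodic bit-patterns; each pattern whose repetition reproduces
# the observed prefix gets weight 2**(-length of pattern).

from itertools import product

def predict_next_bit(observed: str, max_len: int = 10):
    """Return P(next bit = 0) and P(next bit = 1) under a
    length-penalizing prior over periodic-pattern hypotheses."""
    weight = {"0": 0.0, "1": 0.0}
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            pattern = "".join(bits)
            # A hypothesis "explains" the data if repeating it
            # reproduces the observed prefix.
            generated = pattern * (len(observed) // length + 2)
            if generated.startswith(observed):
                next_bit = generated[len(observed)]
                weight[next_bit] += 2.0 ** (-length)  # universal prior
    total = weight["0"] + weight["1"]
    return {b: w / total for b, w in weight.items()}

probs = predict_next_bit("010101")
```

On the input "010101", the short pattern "01" carries most of the weight, so the prediction strongly favors "0" as the next bit. The prediction-by-weighted-hypotheses structure is the point; the restricted hypothesis class is just to keep it computable.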
The whole area seems under-explored to me, with a lot of potential low-hanging fruit. That is pretty strange, considering how important the area is. Maybe people are put off by all the dense maths.
On the other hand, general systems tend to suffer from jack-of-all-trades syndrome. So: you may have to explore a little and then decide where your talents are best used.
This seems to be the biggest issue for me. My tendency is to pick up as much as I can easily digest (relatively speaking: I do read portions of relevant texts and articles, and often work out a few problems when the material calls for it) and then move on. I do generally return to certain material to delve deeper once it becomes clear that doing so would be useful.
Most of my knowledge base right now is in mathematical logic (some recursive function theory, theory of computation, computational complexity, a smattering of category theory), some of the more discrete areas of mathematics (mainly algebraic structures and computational algebraic geometry), and analytic philosophy (philosophical logic, philosophy of mathematics, philosophy of science).
Over the past several (4-5) months I’ve been working off and on through the material suggested on LessWrong: the sequences, decision theory, Bayesian inference, evolutionary psychology, cognitive psychology, cognitive science, etc. I’ve only gotten serious about tackling these and other areas related to FAI over the past couple of months (and very serious over the past few weeks).
Still, nothing seems to pop out at me as ‘best suited for me’.
OK. Keep going—or take a break. Good luck!
Thanks!
I’ll keep it up for as long as possible. I tend to become quite obsessed with even ordinary subjects that catch my attention, so that should be all the more true of FAI, since I take Unfriendly AGI seriously as an existential threat and FAI seriously as a major benefit to humanity.