In other words, epistemology seems too important to leave to non-mathematical methods.
It doesn’t follow that a particular piece of mathematics is the way to go.
Is there another non-trivial mathematical account of how an agent can come to have accurate knowledge of its environment that is general enough to deserve the name ‘epistemology’?
This is what I was thinking: investing too much time and energy in AIXI simply because it seems to be the most ‘obvious’ option currently available could blind you to other avenues of approach.
I think you should know the central construction; it’s simple enough (half of Hutter’s “gentle introduction” would suffice). But at least read some good textbooks (such as AIMA) that give you an overview of the field before charting an exploration of the primary literature (not sure if you mentioned before what’s your current background).
I own a copy of AIMA, though I admittedly haven’t read it from cover to cover. I did an independent study learning/coding some basic AI stuff about a year ago; the professor introduced me to AIMA.
not sure if you mentioned before what’s your current background
It’s a bit difficult to summarize. I sort of did so here, but I didn’t include a lot of detail.
I suppose I could try to hit a few specifics. I was jumping around The Handbook of Brain Theory and Neural Networks for a bit; I picked up the overviews and read a few of the articles, but haven’t really come back to it yet. I’ve read a good number of articles from the MIT Encyclopedia of Cognitive Science. I’ve read a (small) portion of “Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems” (I ended up delving too far into molecular biology and organic chem, so I abandoned it for the time being, though I would like to look at Comp Neurosci again, maybe using From Neuron to Brain instead, which seems more approachable). And I read a bit of “Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting”, partly to get a sense of just how much current computational models of neurons might diverge from actual neuronal behavior, but mostly to get an idea of some alternatives.
As I mentioned in my response to timtyler, I tend to cycle through my readings quite a bit. I like to pick up a small cluster of ideas, let them sink in, and move on to something else, coming back to the material later if it still seems relevant to my interests. Once it’s popped up a few times I make a more concerted effort to learn it. In any event, my main goal over the past few months was to try to get a better overview of a large amount of material relevant to FAI.
Is there another non-trivial mathematical account of how an agent can come to have accurate knowledge of its environment
Pretty much: Solomonoff Induction. That does most of the work in AIXI. OK, it won’t design experiments for you, but there are various approaches to doing that...
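For anyone following along, the construction in question is, roughly, the universal prior at the heart of Solomonoff induction (a sketch of the standard definition only; notation loosely follows Hutter):

\[ M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-\ell(p)} \]

where U is a universal monotone Turing machine, the sum ranges over programs p whose output begins with the string x, and \(\ell(p)\) is the length of p in bits. Prediction is then just conditioning on this mixture: \( M(x_{t+1} \mid x_{1:t}) = M(x_{1:t} x_{t+1}) / M(x_{1:t}) \).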
AIXI is more than just Solomonoff induction. It is Solomonoff induction plus some other stuff. I’m a teensy bit concerned that you are giving AIXI credit for Solomonoff induction’s moves.
AIXI is more than just Solomonoff induction. It is Solomonoff induction plus some other stuff.
Right. The other stuff is an account of the most fundamental and elementary kind of reinforcement learning. In my conversations (during meetups to which everyone is invited) with one of the Research Fellows at SIAI, reinforcement learning has come up more than Solomonoff induction.
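As a rough sketch of how that reinforcement-learning layer sits on top of the induction part (simplified from Hutter’s formulation; k is the current cycle, m a fixed horizon, and a, o, r denote actions, observations, and rewards):

\[ a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} \]

In other words, the agent maximizes expected total reward under the same universal mixture that Solomonoff induction uses for prediction.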
But yeah, the OP should learn Solomonoff induction first, then decide whether to learn AIXI. That would have happened naturally if he’d started reading Legg’s thesis, unless the OP has some weird habit of always finishing PhD theses that he has started.
Since we’ve gone back and forth twice, and no one’s upvoted my contributions, this will probably be my last comment in this thread.
Hi, Vladimir!
Is there another non-trivial mathematical account of how an agent can come to have accurate knowledge of its environment that is general enough to deserve the name ‘epistemology’?
This is a bad argument, since the best available option isn’t necessarily a good option.
Pretty much: Solomonoff Induction. That does most of the work in AIXI. OK, it won’t design experiments for you, but there are various approaches to doing that...
When I use the word ‘AIXI’ above, I mean to include Solomonoff induction. I would have thought that was obvious.
One has to learn Solomonoff induction to learn AIXI.