And I may be leaning more toward the “fallacy of compression” side, I’ll grant that. But I don’t see how you can disagree with it, since you find the subdivision I outlined to have some potential: if people are unknowingly shifting between two very different meanings of “intelligence”, that certainly is a fallacy of compression.
Another point: I’m not sure your description of AIXI is particularly accurate. AIXI works where Solomonoff induction works, and Solomonoff induction works pretty well in this world. It might not be perfect (reference-machine issues), but it is pretty good. AIXI would work very badly in worlds where Solomonoff induction was a misleading guide to its sense data, but its performance in this world doesn’t suffer from trying to deal with those worlds, since in those worlds it would be screwed anyway.
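For concreteness, the “reference machine issues” can be made explicit (this is just the standard textbook statement, added here for reference). The Solomonoff prior assigns to a string x the total weight of all programs that make a chosen universal machine U print something starting with x:

$$M_U(x) = \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}$$

where ℓ(p) is the program’s length in bits. Switching to a different universal machine U′ changes this only up to a multiplicative constant (roughly 2 to the minus the length of the shortest U-program that simulates U′), which is independent of x but can be very large when you only have a little data. That is the sense in which it is “not perfect, but pretty good”.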
Well, actually, you’re highlighting the issue I raised in my first post: computable approximations of Solomonoff induction work pretty well … when fed useful priors! But those priors encode a lot of implicit knowledge about the world, and that knowledge is what lets you skip over an exponentially large number of shorter hypotheses by the time you apply the method to any specific problem.
AIXI (and its computable approximations), starting from a purely Occamian prior, is stuck iterating through an infeasibly large number of candidate generating functions before it gets to the right one. To speed it up you have to feed it knowledge you gained elsewhere (and, of course, find a way to represent that knowledge). But at that point, your prior includes a lot more than a penalty for length!
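To put a rough number on “infeasibly large”: here is a toy sketch (my own illustration, not AIXI or any real approximation of it) of a blind, length-ordered search under a pure 2^(−length) prior. The function name and the printed counts are just for the example; the only point is how fast the candidate pool blows up.

```python
import itertools

# Toy sketch (not AIXI, not a real Solomonoff approximation): enumerate
# candidate "generating functions" as raw bit-strings in order of length,
# weighting each by the purely Occamian prior 2^(-length).

def occamian_enumeration(max_length):
    """Yield (program, prior_weight) for every bit-string up to max_length bits."""
    for length in range(1, max_length + 1):
        for bits in itertools.product("01", repeat=length):
            yield "".join(bits), 2.0 ** (-length)

if __name__ == "__main__":
    # The enumeration itself, for tiny lengths:
    for program, weight in occamian_enumeration(3):
        print(program, weight)

    # How many candidates a blind length-ordered search must wade through:
    for n in (20, 50, 100, 200):
        count = sum(2 ** length for length in range(1, n + 1))
        print(f"bit-strings of length <= {n}: {count:,}")
    # Even a hypothesis only ~200 bits long sits behind roughly 2^200
    # shorter-or-equal candidates: the "infeasibly large" iteration above.
    # Knowledge gained elsewhere is what lets you skip almost all of them,
    # and encoding that knowledge changes the prior accordingly.
```

The counting is the whole argument: under a pure length penalty the search space doubles with every extra bit, so any computable approximation that performs well in practice is smuggling in structure from somewhere other than the length penalty.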
Okay, thanks for the proper feedback :-)