Whether you use AIXI or IBP, a continual learning algorithm must contend with indexical uncertainty, which means it must contend with indexical complexity in some fashion.
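One way to make “indexical complexity” concrete (my informal sketch, not a definition taken from the AIXI or IBP literature; the symbols H, W, B are illustrative): a hypothesis that predicts an agent’s observations has to encode both a world model and a bridge locating the agent within that world, and the bridge term is what a first-person learner cannot avoid paying for.

```latex
% Informal decomposition (my sketch, not a theorem from the literature):
% a hypothesis H about the observation stream splits into a world model W
% and a bridge map B picking out the agent inside W.
\[
  K(H) \;\approx\; K(W) \;+\; \underbrace{K(B \mid W)}_{\text{indexical complexity}}
\]
% Even when K(W) is small (simple laws of physics), K(B | W) can be large,
% and it is this second term that a first-person online learner must pay.
```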
As far as I understand, IBP tries to evaluate hypotheses according to the complexity of the laws of physics, not the bridge-transformation (i.e., indexical) information. But that cannot allow it to overcome the fundamental limitations of the first-person perspective faced by an online learner, as proved by Shane Legg. That’s a fact about the difficulty of the problem, not a feature (or bug) of the solution.
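For reference, the result I have in mind is from Legg’s “Is There an Elegant Universal Theory of Prediction?”; informally restated below (my paraphrase, with exact constants and bounds elided):

```latex
% Informal paraphrase of Legg (2006), not the exact statement:
% any predictor P that eventually predicts every computable binary
% sequence of Kolmogorov complexity at most n must itself be complex:
\[
  K(P) \;\gtrsim\; n
\]
% So powerful online learners cannot be simple. The difficulty lives in
% the problem itself, independent of whether the learner is AIXI- or
% IBP-style.
```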
I agree that Solomonoff induction faces a potential malign-prior problem, and I default to believing Vanessa that IBP solves this.
By the way, Scott Aaronson has also made this rebuttal to Searle in his paper on why philosophers should care about computational complexity.
Re Scott Aaronson’s rebuttal: as I remember, he focused on computational complexity, not indexical complexity. His paper is very useful and makes good points, but strictly speaking it doesn’t directly address the argument, whereas my comment above does.
Though I do think his paper is underrated amongst philosophers (barring the parts about how complexity theory relates to our specific universe).
A question I forgot to ask: why can’t we reduce the first-person perspective to the third-person perspective, and base our lower bounds on the complexity of the world rather than on the complexity of the agent?
Because we can’t build agents with a “god’s-eye” view of the universe; we can only build agents inside the universe, with limited sensors.
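A toy illustration of the point (my own construction, not drawn from AIXI or IBP; the world and sensor here are hypothetical): even an agent that knows the third-person description of the world perfectly can be unable to locate itself within it, because limited sensors alias distinct positions.

```python
# Toy example (my construction): an agent knows the world's full
# third-person description but senses only the cell it stands on.

# A 1-D cyclic world of colored cells; the agent moves right one cell
# per time step. This is the complete, fully known world model.
WORLD = ["red", "blue", "red", "blue"]

def observe(position: int) -> str:
    """Limited first-person sensor: the color of the cell under the agent."""
    return WORLD[position % len(WORLD)]

def consistent_positions(observations: list[str]) -> list[int]:
    """All starting positions consistent with the observation history."""
    candidates = []
    for start in range(len(WORLD)):
        if all(observe(start + t) == obs for t, obs in enumerate(observations)):
            candidates.append(start)
    return candidates

# After seeing "red" then "blue", two starting positions remain viable.
print(consistent_positions(["red", "blue"]))  # -> [0, 2]
```

Perfect knowledge of WORLD (low world complexity) does not pin down the agent’s index: the leftover bits distinguishing the candidate positions are exactly the indexical information the embedded agent must supply or learn, which is why a lower bound stated purely in terms of the world’s complexity misses the cost the agent actually pays.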