I think this thought experiment is nice because it reveals the pointlessness of a lot of philosophical debates about Solomonoff, Bayes, etc. Of course the colonists have to choose a prior before the moment of parting, and of course if they choose a good prior they will get short codes. And the Solomonoff distribution may not be perfect in some metaphysical sense, but it’s obviously the right prior to choose in the large T regime. Better world-specific formats exist, but their benefit is small compared to T.
Well, the thought experiment doesn’t accomplish that. Solomonoff induction is not necessarily optimal (and most probably isn’t optimal) in your scenario, even and especially for large T. The time it takes any computable Occamian approximation of Solomonoff induction to find the optimal encoding is superexponential in the length of the raw source data. So the fact that it will eventually reach a superior or near-superior encoding is little consolation when Alpha Centauri and Sol will have long burned out before Solomonoff has converged on a solution.
The inferiority of Solomonoff (Occamian) induction, which enumerates candidate generating algorithms in order of increasing length until one matches the data, is not some metaphysical or philosophical issue; it follows directly from the real-world time constraints that arise in practical situations.
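To make the blowup concrete, here is a minimal sketch of that enumeration on a toy machine of my own invention (three opcodes standing in for a real universal machine, purely illustrative). The number of candidate programs grows as 3^n in program length n, which is the exponential search cost at issue:

```python
from itertools import product

# Toy stand-in for a universal machine. A "program" is a tuple of opcodes:
#   0 -> append '0' to the output
#   1 -> append '1' to the output
#   2 -> double the output so far
def run(program):
    out = ""
    for op in program:
        if op == 0:
            out += "0"
        elif op == 1:
            out += "1"
        else:
            out += out
    return out

# Occamian brute-force search: try every program in order of increasing
# length until one reproduces the data. The candidate count per length n
# is 3**n, so total work grows exponentially in the length of the
# shortest matching program.
def shortest_program(data, max_len=12):
    tried = 0
    for n in range(1, max_len + 1):
        for prog in product((0, 1, 2), repeat=n):
            tried += 1
            if run(prog) == data:
                return prog, tried
    return None, tried

prog, tried = shortest_program("01010101")
print(prog, tried)  # finds the 4-opcode program (0, 1, 2, 2)
```

Even on this tiny example the search burns through dozens of candidates to find a 4-opcode program; for realistic data, where the shortest generator is hundreds or thousands of symbols long, the same loop never terminates on astronomical timescales.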
My point is that any practical attempt to incorporate Solomonoff induction must also draw on knowledge of the data’s regularity that was found some other way, which makes it questionable whether Solomonoff induction captures everything we mean by “intelligence”. This incompleteness also raises the question of which this-world-specific methods we actually used to reach our current state of knowledge, the state that makes Bayesian inference effective in practice.