Some degree of real-life applicability. If your mathematically precise framework nonetheless requires way more computing power than is available around you (or, in some cases, in the entire observable universe) to approximate it properly, you have a serious practical issue.
> It’s how science works: You focus on simple hypotheses and discard/reweight them according to Bayesian reasoning.
The percentage of scientists I know who use explicit Bayesian updating[1] to reweight hypotheses is a flat 0%. They use Occam’s razor-type intuitions, and those intuitions can be formalized using Solomonoff induction,[2] but that doesn’t mean they are using the latter.
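For concreteness, explicit Bayesian updating with an Occam-style prior would look something like the toy sketch below, which essentially nobody carries out by hand; the hypotheses, “description lengths”, and data are invented purely for illustration.

```python
# Toy illustration of explicit Bayesian updating with an Occam-style prior.
# Hypotheses, "description lengths", and data are all invented for the example.
hypotheses = {
    "fair":        {"p_heads": 0.50, "desc_bits": 1},
    "biased_0.7":  {"p_heads": 0.70, "desc_bits": 4},
    "biased_0.93": {"p_heads": 0.93, "desc_bits": 8},
}

# Simplicity prior: weight 2^(-description length), normalized.
weights = {name: 2.0 ** -h["desc_bits"] for name, h in hypotheses.items()}
z = sum(weights.values())
posterior = {name: w / z for name, w in weights.items()}

data = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails

for outcome in data:
    # Multiply each hypothesis's weight by the likelihood of the observation...
    for name, h in hypotheses.items():
        p = h["p_heads"] if outcome == 1 else 1.0 - h["p_heads"]
        posterior[name] *= p
    # ...then renormalize so the weights remain a probability distribution.
    z = sum(posterior.values())
    posterior = {name: w / z for name, w in posterior.items()}

print(posterior)
```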
> reasonable assumption of a computable world
Reasonable according to what? Substance-free vibes from the Sequences? The map is not the territory. A simplifying mathematical description need not represent the ontologically correct way of identifying something in the territory.
> It predicts well: It’s provenly a really good predictor
So can you point to any example of anyone ever predicting anything using it? Or using universal Turing machines to compute the description lengths of programs meant to represent real-world hypotheses?
Except those intuitions are about science in the real world, and Solomonoff induction requires computability, and even if you approximate it, it requires so much computing power that… oh, hey, same objection as before!
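To put a very rough, purely illustrative number on the compute objection: naively approximating the Solomonoff mixture means running every program up to some length for up to some step cutoff, and that blows up immediately. The lengths and cutoff below are arbitrary, not a claim about any particular approximation scheme.

```python
# Back-of-the-envelope cost of naively approximating the Solomonoff mixture:
# run every binary program of length <= L for up to T steps (a time cutoff is
# needed because some programs never halt). Numbers are purely illustrative.
def naive_cost(max_len_bits: int, time_cutoff_steps: int) -> int:
    num_programs = 2 ** (max_len_bits + 1) - 2  # nonempty binary strings of length <= L
    return num_programs * time_cutoff_steps

for L in (30, 100, 300):
    print(f"L = {L:>3} bits: ~{naive_cost(L, 10**6):.2e} machine steps")
# Even at L = 300 bits (a tiny hypothesis by real-world standards), the count
# exceeds any sensible estimate of the compute available in the observable universe.
```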
I think we may not disagree about any truth-claims about the world. I’m just satisfied that the north star of Solomonoff induction exists at all, and that it is as computable (albeit only semicomputable), well-predicting, science-compatible and precise as it is. I expected less from a theory that seems so unpopular.
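To spell out what “semicomputable” means here (this is just the standard definition, nothing specific to this thread): the object in question is, modulo the choice of universal (monotone) machine $U$, the mixture

$$M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-|p|},$$

where the sum ranges over programs $p$ whose output begins with $x$. You can approximate $M(x)$ from below by enumerating and running ever more programs, but the halting problem means you can never certify how close your running estimate is.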
> It predicts well: It’s provenly a really good predictor
>
> So can you point to any example of anyone ever predicting anything using it?
No, but crucially, I’ve also never seen anyone using any other method predict as well as someone using Solomonoff induction :)
Also, there’s actually a decent argument that LLMs can be viewed as approximating something like Solomonoff induction. For instance, my ARENA final project studied the ability of LLMs to approximate Solomonoff induction, with pretty good results.
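I won’t reproduce the project here, but the general shape of such an experiment, measuring a model’s per-character log loss on sequences generated by short programs, looks roughly like the sketch below; the toy “programs” and the stand-in `lm_logprob` are hypothetical placeholders, not the project’s actual code.

```python
# Hypothetical sketch (not the actual project): measure an LM's average per-character
# log loss on sequences generated by short deterministic "programs". A predictor that
# behaves like Solomonoff induction should drive this loss toward zero on such data.
import math
import random

def run_toy_program(seed: int, length: int) -> str:
    """Stand-in for 'the output of a short program': a repeating binary pattern."""
    rng = random.Random(seed)
    period = rng.randint(1, 4)
    pattern = "".join(rng.choice("01") for _ in range(period))
    return (pattern * (length // period + 1))[:length]

def lm_logprob(prefix: str, next_char: str) -> float:
    """Placeholder model: uniform over {'0', '1'}. Swap in your real LLM's log-prob here."""
    return math.log(0.5)

def avg_loss_on_program_outputs(num_programs: int = 100, length: int = 64) -> float:
    total, count = 0.0, 0
    for seed in range(num_programs):
        seq = run_toy_program(seed, length)
        for i in range(1, length):
            total += -lm_logprob(seq[:i], seq[i])
            count += 1
    return total / count

print(avg_loss_on_program_outputs())  # ~0.693 (log 2) for the uniform placeholder
```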
Lately there has been some (still limited) empirical success in pretraining transformers on program outputs, or something along those lines, inspired directly by Solomonoff induction; see “universal pretraining”.
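I don’t know the exact recipe used there, but the flavor of the data-generation step, sampling short random programs and keeping their cutoff-limited outputs as synthetic pretraining sequences, can be sketched like this (the toy “language” is purely illustrative):

```python
# Toy sketch of the data-generation idea behind "pretrain on program outputs":
# sample short random programs, run each with step/length cutoffs (since some
# never terminate usefully), and keep their outputs as training sequences.
# The "programming language" here is deliberately trivial and purely illustrative.
import random

def random_program(rng: random.Random, max_len: int = 8):
    """A 'program' is a short list of (op, arg) pairs acting on a single counter."""
    ops = ["add", "mul", "mod", "emit"]
    return [(rng.choice(ops), rng.randint(1, 9)) for _ in range(rng.randint(1, max_len))]

def run(program, max_steps: int = 256, max_output: int = 64) -> str:
    """Interpret the program with hard cutoffs, emitting digits as 'output'."""
    counter, out, steps = 1, [], 0
    while steps < max_steps and len(out) < max_output:
        for op, arg in program:
            steps += 1
            if op == "add":
                counter += arg
            elif op == "mul":
                counter *= arg
            elif op == "mod":
                counter %= (arg + 1)
            else:  # emit
                out.append(str(counter % 10))
            if steps >= max_steps or len(out) >= max_output:
                break
    return "".join(out)

def make_corpus(num_programs: int = 1000, seed: int = 0):
    """Outputs of many random programs, usable as synthetic pretraining text."""
    rng = random.Random(seed)
    return [run(random_program(rng)) for _ in range(num_programs)]

if __name__ == "__main__":
    print(make_corpus(5))
```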