I think we may not disagree about any truth-claims about the world. I’m just satisfied that the north star of Solomonoff induction exists at all, and that it is as well-predicting, science-compatible, and precise as it is, and even as close to computable as it is (semicomputable, albeit not fully computable). I expected less from a theory that seems so unpopular.
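To make that concrete: the object under discussion is the universal prior M, and the well-predicting claim is Solomonoff's error bound. A standard statement, from memory, so treat the exact constant as approximate (assuming a prefix-free universal Turing machine U, with K(μ) the complexity of the data-generating measure):

```latex
% Universal (Solomonoff) prior of a binary string x: the total weight
% of all programs p whose output on U begins with x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

% Solomonoff's bound: for data drawn from any computable measure \mu,
% the cumulative expected squared prediction error of M is finite and
% bounded by the complexity of \mu.
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ \left( M(1 \mid x_{<t}) - \mu(1 \mid x_{<t}) \right)^{2} \right] \le \frac{\ln 2}{2} \, K(\mu)
```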
> It predicts well: it’s provably a really good predictor
So can you point to any example of anyone ever predicting anything using it?
No, but crucially, I’ve also never seen anyone using any other method predict as well as someone using Solomonoff induction :)
Also, there’s actually a decent argument that LLMs can be viewed as approximating something like Solomonoff induction. For instance, my ARENA final project studied how well LLMs approximate Solomonoff induction, with pretty good results.
Lately there has also been some (still limited) empirical success pretraining transformers on the outputs of randomly sampled programs, an approach inspired directly by Solomonoff induction; see “universal pretraining”.
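For a sense of what that looks like, here is a minimal toy sketch of the data-generation step (my own illustration, not the actual universal-pretraining pipeline): sample Brainfuck programs with geometric length as a crude stand-in for the 2^-|p| prior, run them under a step cap, and keep their outputs as pretraining strings.

```python
import random

OPS = "+-<>[].,"  # Brainfuck instruction set; '.' writes the current cell

def run(program: str, max_steps: int = 10_000, tape_len: int = 256) -> bytes | None:
    """Run a Brainfuck program with no input; return its output, or None on error/timeout."""
    # Precompute matching brackets; reject unbalanced programs.
    stack, jump = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None
            j = stack.pop()
            jump[i], jump[j] = j, i
    if stack:
        return None

    tape = [0] * tape_len
    ptr = pc = steps = 0
    out = bytearray()
    while pc < len(program):
        steps += 1
        if steps > max_steps:
            return None  # treat non-halting programs as contributing nothing
        c = program[pc]
        if c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr = (ptr + 1) % tape_len
        elif c == "<": ptr = (ptr - 1) % tape_len
        elif c == ".": out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]  # skip loop body
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]  # repeat loop body
        # ',' reads input; with no input we treat it as a no-op
        pc += 1
    return bytes(out)

def sample_program(halt_prob: float = 0.05) -> str:
    """Sample a program with geometric length -- a crude stand-in for the 2^-|p| prior."""
    prog = []
    while random.random() > halt_prob:
        prog.append(random.choice(OPS))
    return "".join(prog)

# Collect program outputs as strings to pretrain a sequence model on.
dataset = []
while len(dataset) < 100:
    out = run(sample_program())
    if out:  # keep nonempty outputs of halting, well-formed programs
        dataset.append(out)
print(dataset[:5])
```

A sequence model pretrained on strings like these is, in effect, being fit to samples from a rough computable approximation of the universal distribution, which is the intuition behind the idea.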