Agreed. The primary thing Solomonoff induction doesn’t take into account is computational complexity / compute. But you can simply include a reasonable time-penalty, and most of the results still go through. It becomes a bit more like logical inductors.
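For concreteness, one standard way to formalize a time-penalty is Levin's Kt complexity (closely related to Schmidhuber's speed prior): charge log2(runtime) bits on top of program length. A minimal sketch, with illustrative function names (not anyone's actual implementation):

```python
import math

def solomonoff_weight(program_length_bits: int) -> float:
    # Plain Solomonoff-style weight: 2^-len(p), no time penalty.
    return 2.0 ** (-program_length_bits)

def speed_prior_weight(program_length_bits: int, steps: int) -> float:
    # Kt-style weight: 2^-(len(p) + log2(time)), so each doubling of
    # runtime costs the same as one extra bit of program length.
    return 2.0 ** (-(program_length_bits + math.log2(steps)))
```

Note how mild the penalty is: a program that runs 2^20 times longer loses only 20 bits of weight, which is why most of the usual convergence results survive the change.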
Solomonoff induction also dovetails (hah) nicely with the fact that next-token prediction was all you need for intelligence.[1]
If logical inductors are what one wants, just use those.
a reasonable time-penalty
I’m not entirely sure, but I suspect that I don’t want any time penalty in my (typical human) prior. E.g. even if quantum mechanics takes non-polynomial time to simulate, I still think it’s a likely hypothesis. A time penalty just doesn’t seem related to what I pay attention to when I consult my prior over the laws of physics / fundamental hypotheses. There are also many other ideas for augmenting a simplicity prior that fail similar tests.
next-token prediction was all you need for intelligence

Well, almost; the gap is exactly AIXI.
What do you mean by this?