Notably, the difference can become non-constant if we allow for semi-measures on infinite strings:
https://www.lesswrong.com/posts/KcvJXhKqx4itFNWty/k-complexity-is-silly-use-cross-entropy-instead#LNgzYraTeGcEyry9a
Isn’t that significant?
It is, when dealing with sequences that extend to infinity. In that case you get the "KM complexity" of Definition 4.5.8 in Li & Vitányi (2019). For online sequence prediction, Solomonoff's prior needs to sum the weights of every program consistent with the observed prefix, not just the shortest one.
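To make that concrete (standard definitions in Li & Vitányi's notation, stated here as my own gloss rather than a quote): with $U$ a universal monotone machine,

$$\mathbf{M}(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}, \qquad KM(x) \;=\; -\log_2 \mathbf{M}(x),$$

where $U(p) = x\ast$ means program $p$ outputs some extension of $x$. Where $K(x)$ charges only the single shortest program, $\mathbf{M}(x)$ pools the weight of every program consistent with the prefix $x$; that pooling is exactly what lets the gap between $K$ and $KM$ grow beyond a constant on infinite sequences.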
No such complications arise for the entropy of a bounded system at a fixed precision. And ultimately, it appears that for entropy to increase at all, you need some kind of coarse-graining, which brings us back to finite strings. I discuss this in the Background section and around Corollary 1.
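(The one-line version of that last point, as I see it: under Hamiltonian dynamics, Liouville's theorem keeps the fine-grained Gibbs entropy $-\int \rho \log \rho \, d\Gamma$ exactly constant, so any increase has to come from lumping phase space into finitely many cells at fixed precision; and a fixed-precision description of a bounded system is a finite string.)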