They are equal! As discussed in the comments to that post, the difference is at most an additive constant, and it’s possible to make that constant vanish by an appropriate choice of reference universal Turing machine.
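For finite strings, a minimal sketch of why the gap is only a constant (my notation, not the linked post’s): by the coding theorem, prefix complexity and the negative log of the discrete universal semimeasure $m$ agree up to an additive constant,

$$ m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}, \qquad K(x) \;=\; -\log_2 m(x) + O(1), $$

so the cross-entropy $-\log_2 m(x)$ and $K(x)$ can only disagree by that $O(1)$ term, which depends on the reference machine $U$.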
Notably, the difference can become non-constant if we allow for semi-measures on infinite strings:
https://www.lesswrong.com/posts/KcvJXhKqx4itFNWty/k-complexity-is-silly-use-cross-entropy-instead#LNgzYraTeGcEyry9a
Isn’t that significant?
It is, when dealing with sequences that go on to infinity. In that case you get the “KM complexity” of Definition 4.5.8 in Li & Vitanyi (2019). For online sequence prediction, Solomonoff’s prior needs to sum the weights of every program consistent with the observed prefix, not just the shortest one.
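To make that concrete, here is a sketch of the continuous case in notation I’m assuming matches Li & Vitanyi’s (worth checking against Definition 4.5.8): the Solomonoff prior $M$ weights every program whose output extends the observed prefix, and KM is its negative log,

$$ M(x) \;=\; \sum_{p \,:\, U(p) \text{ starts with } x} 2^{-|p|}, \qquad KM(x) \;=\; -\log_2 M(x). $$

Because $M(x)$ aggregates all of those programs rather than only the shortest one, $KM(x)$ and the length of the shortest program can drift apart on infinite sequences, which is the non-constant gap the linked comment points at.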
No such complications appear in the entropy of a bounded system at a fixed precision. And ultimately, it appears that for entropy to increase, you need some kind of coarse-graining, leading us to finite strings. I discuss this in the Background section and around Corollary 1.