Kolmogorov complexity isn’t computable, so how do you do this?
Enumerate and run every bitstring as a program, in order of increasing length, until one emits the requisite bitstring; by construction that program is the shortest one which emits it, so its length is the Kolmogorov complexity. Many of these programs will not terminate, so you use the dovetail strategy to allocate computing time to every program.
Unfortunately, you cannot prove whether an arbitrary program terminates, so the best this scheme can do is provide an upper bound for KC.
Without waiting forever, there's no way of knowing whether some still-running program shorter than that bound will eventually terminate and output the string.
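To make the dovetailing concrete, here is a toy sketch. The two-bit opcode machine below is invented purely for illustration (it is not a real universal machine, and the function name `kc_upper_bound` is my own), but it shows the structure of the search: admit one new program per round, advance every running program by one step, and return the first program that halts with the target output. As the thread notes, the result is only an upper bound on complexity relative to this machine:

```python
from itertools import count, product

def machine(program):
    """Toy machine: consume the program two bits at a time.
    00 emits '0', 01 emits '1', 10 loops forever, 11 halts.
    Written as a generator so the dovetailer can advance it one
    step at a time; its final yield is the output string."""
    out, i = [], 0
    while i + 1 < len(program):
        op = program[i:i + 2]
        if op == "10":
            while True:          # deliberately non-terminating
                yield None
        if op == "11":
            break
        out.append("0" if op == "00" else "1")
        i += 2
        yield None
    yield "".join(out)

def kc_upper_bound(target, max_rounds=10000):
    """Dovetail over all bitstring programs in length order; return
    the first halting program that emits `target`. Its length is only
    an UPPER bound on this machine's Kolmogorov complexity: a shorter
    still-running program might yet halt with the same output."""
    programs = ("".join(b) for n in count(1) for b in product("01", repeat=n))
    running = []                          # (program text, paused machine)
    for _ in range(max_rounds):
        p = next(programs)
        running.append((p, machine(p)))   # admit one new program per round
        for prog, m in running:           # one step for every program
            step = next(m, None)
            if isinstance(step, str) and step == target:
                return prog
    return None                           # gave up; bound unknown
```

Note that even here the first match is not guaranteed to be the true minimum: a shorter program stuck in the `10` loop never halts, and the dovetailer has no way to rule it out other than by inspection, which the halting problem forbids in general.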
Yes, that’s the loophole for Manfred’s scheme (if I haven’t read into his comment something he didn’t intend).
Gwern: I think you understand this, but for the benefit of other readers:
The strategy of enumerate, run in parallel, and pick the first to halt doesn't give Kolmogorov complexity. It gives an upper bound. There might be some shorter program that will halt with the appropriate output, but which simply hasn't gotten there yet when you find the first one that halts.
Note that this only bounds the complexity relative to the system used to run this operation; a different system can assign a different complexity (though the invariance theorem says the values differ by at most an additive constant).
[ETA]: Also, I think this runs into the Chaitin issue. (Personally, I think the issue is with the flawed definition of "complexity", which counts only the size of the reference and not the processing power needed to disentangle the target from the reference.)