Kolmogorov complexity is defined relative to a fixed encoding, and yet this topic seems to be absent from the article.
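For reference, the standard way the textbooks handle this is the invariance theorem: for any two universal machines U and V, K_U(x) ≤ K_V(x) + c(U, V), where the constant c(U, V) is roughly the length of an interpreter for one machine written for the other, and does not depend on x. The point of the examples below is that "up to an additive constant" quietly hides how large that constant can be for the encodings we actually care about.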
Writing a solver for a system of linear equations in plain Basic would be a decent-sized project, while in Octave it is a one-liner.
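As a concrete sketch (Python/NumPy standing in for Octave here, where the same thing is literally `x = A \ b`; the matrix values are made up purely for illustration):

```python
# The "one-liner" point: solving A x = b in a high-level numerical environment.
import numpy as np

A = np.array([[3.0, 2.0, -1.0],
              [2.0, -2.0, 4.0],
              [-1.0, 0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

x = np.linalg.solve(A, b)  # the entire "solver"
print(x)                   # -> [ 1. -2. -2.]
```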
Taking your Tetris example: sure, 6 KB seems small, as long as you restrict yourself to the space of all possible programs for the Gameboy, or whichever platform the example was taken from. But if your goal is to encode Tetris for a computer engineer who has no knowledge of the Gameboy, you will have to include, at the very least, the documentation for the CPU ISA, the hardware architecture of the device, and the quirks of its I/O hardware. That would already bring the “size of Tetris” to tens of megabytes. Describing it to a person from the 1950s would, I suspect, require a decent chunk of the Internet on top of that.
Using genome size as a proxy for an organism’s complexity is very misleading, as it sweeps under the rug the huge (and difficult to quantify) amount of knowledge about the world that has been extracted over 3.5 billion years of evolution and baked into the encoding that contemporary biology runs on.
I’m actually convinced that, at least here, evolution mostly cannot do this, and that the ability to extract knowledge about the world and transmit it to the next generation faithfully enough to get a positive feedback loop is the main reason humanity has catapulted into the stratosphere; it’s rare for this to happen in general.
More generally, I’m very skeptical of the idea that much learning happens through natural selection, and the epigenetic mechanisms that have been proposed as a way for natural selection to encode learned knowledge are more or less fictional:
https://www.lesswrong.com/posts/zazA44CaZFE7rb5zg/transhumanism-genetic-engineering-and-the-biological-basis#JeDuMpKED7k9zAiYC
I don’t think this makes it a fairer comparison. For bacteria, doesn’t that mean you’d have to include descriptions of DNA, amino acids, proteins in general, and everything known about the specific proteins the bacteria use, etc.? You quickly end up with a decent chunk of the Internet as well.
Kolmogorov complexity is not about how much background knowledge or computational effort was required to produce some output from first principles. It is about how much, given infinite knowledge and time, you can compress a complete description of the output. Which maybe means it’s not the right metric to use here...
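(For concreteness, the usual definition fixes a universal machine U and sets K_U(x) = min{ |p| : U(p) = x }, the length of the shortest program that makes U output x. Nothing in that definition charges for the time the program takes to run or for the effort needed to find it, which is what I mean by "given infinite knowledge and time".)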