This sort of processing shouldn’t be thought of as compression. Like a visual system that lets the AI “sense” object coordinates in a video feed, a sensory modality for code would simply throw away most of the information.
True. This is a sort of lossy compression—or several layers of lossy compression, if we want the AI to be able to look at different levels of structure. But “lossy compression” names the huge space of every transformation that throws away some information while reducing message size. I think it takes more work to find the stuff we want within that space than it takes to note that we want lossy compression.
And considered as compression—even lossy compression (or co-compression with the AI’s source code, or any such guff)—a sensory modality is probably suboptimal, because it has to serve other purposes, like making it convenient for the AI to think about code.
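The “several layers of lossy compression” idea above can be made concrete with a toy sketch. Here each “layer” is a lossy view of the same source: the first discards function bodies and keeps only signatures; the second discards even signatures and keeps only a call graph. The specific layers and function names are my own illustration, not anything proposed above—just one arbitrary point in that huge space of lossy transformations:

```python
import ast

SOURCE = """
def add(a, b):
    # a helpful comment, discarded by every layer below
    return a + b

def main():
    print(add(1, 2))
"""

def signatures_only(src):
    """Layer 1: throw away bodies and comments; keep top-level signatures."""
    tree = ast.parse(src)
    return [
        f"def {node.name}({', '.join(a.arg for a in node.args.args)})"
        for node in tree.body
        if isinstance(node, ast.FunctionDef)
    ]

def call_graph(src):
    """Layer 2: throw away even signatures; keep only who-calls-whom edges."""
    tree = ast.parse(src)
    edges = []
    for fn in (n for n in tree.body if isinstance(n, ast.FunctionDef)):
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                edges.append((fn.name, node.func.id))
    return edges

print(signatures_only(SOURCE))
print(call_graph(SOURCE))
```

Both views are tiny compared to the source, and both are useless as compression in the usual sense—you can’t reconstruct the program from either. The point is that which information to throw away is the entire design problem.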
That is how “lossy” compression works.