Two points:
First, this information will be hard to compile, because of the way the systems work, but it seems like a very useful exercise. I would add that program complexity should include some measure of the “size” of the hardware architecture, as well as of the libraries etc. used.
Second, I think that for humans, the relevant size is not just the brain itself, but also the information embedded in the cultural process used for education. This seems vaguely comparable to the training data and/or architecture search for ML models, though the analogy should probably be made more precise.
I agree that the size of libraries is probably important. For many ML models, the under-the-hood optimizer is doing a lot of the “real work”, IMO, rather than the source code that calls the libraries, which is usually much terser.
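To make the terseness point concrete, here is a toy sketch (plain Python, all names hypothetical, not from any real library): the user-facing training script is a few short lines, while the update logic lives in the “optimizer” function they call — and a production optimizer (momentum, Adam, learning-rate schedules, etc.) inside a real library is far larger still.

```python
# Toy illustration: the "library" optimizer holds the update logic;
# the user-facing training script that calls it stays terse.

def sgd_step(params, grads, lr=0.1):
    """Minimal stand-in for a library optimizer: one update rule.
    Real library optimizers are vastly more code than this."""
    return [p - lr * g for p, g in zip(params, grads)]

# --- "User" code: fit y = 2x with a single weight, in a few terse lines ---
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = [0.0]
for _ in range(100):
    # gradient of mean squared error: d/dw (w*x - y)^2 = 2*(w*x - y)*x
    grad = [sum(2 * (w[0] * x - y) * x for x, y in data) / len(data)]
    w = sgd_step(w, grad)

print(round(w[0], 2))  # converges toward 2.0
```

A complexity measure that only counted the “user” script here would miss almost all of the mechanism, which is the asymmetry the comment is pointing at.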