Kolmogorov complexity of the human brain at one instant:
10 to 1000 bits per synapse for weights
Total: to bits
Probably not significantly compressible, considering that e.g. Claude Opus is significantly smarter than Claude Haiku
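The order of magnitude implied by the per-synapse figures above can be sketched as a quick Fermi calculation. The 10 to 1000 bits/synapse range is from the text; the synapse count (~10^14 to 10^15) is a commonly cited figure and an assumption of this sketch, not a number from the note itself:

```python
# Fermi estimate of the information needed to specify an adult brain's
# synaptic weights. bits_per_synapse range is from the text above;
# the synapse counts are an assumption (commonly cited ~1e14-1e15).

def brain_bits(num_synapses, bits_per_synapse):
    return num_synapses * bits_per_synapse

low  = brain_bits(1e14, 10)    # 1e15 bits (~125 TB)
high = brain_bits(1e15, 1000)  # 1e18 bits (~125 PB)
print(f"low:  {low:.0e} bits")
print(f"high: {high:.0e} bits")
```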
Kolmogorov complexity of “100 years of subjective experience that thinks he is [puffymist], a particular human who lived on Earth at ”?
Temporal resolution of perception (“frame rate”): 10 to 30 frames per second
Excludes audio, which has a high sample rate but a low bitrate
Uncompressed information per subjective-moment “frame”: to bits per frame
Empirically: conscious processing is ~40 bits/second, or about 1 bit per “frame”
Let’s say there are to bits of felt-sense “richness” per bit of conscious processing
Compression: call it a factor of (99.9% compression) to (90% compression)
Low-level redundancy: video compression-like between-frame redundancy
High-level redundancy: routines, mental “well-worn grooves”, repetitive daily / yearly patterns
Semantic description: think of image / video generation from prompts of 10 to 100 words
Putting it all together:
Low end:
High end:
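Since the totals depend on several of the quantities above, here is a minimal sketch of the arithmetic. The fps values come from the range stated earlier; the bits-per-frame and compression factors are illustrative placeholders, not the note's own (elided) figures:

```python
# Sketch of the "100 years of subjective experience" estimate.
# All specific inputs below are placeholder assumptions.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def experience_bits(years, fps, bits_per_frame, compression_factor):
    """Total bits of recorded conscious experience after compression."""
    raw_bits = years * SECONDS_PER_YEAR * fps * bits_per_frame
    return raw_bits / compression_factor

low  = experience_bits(100, 10, 1e4, 1000)  # heavy (99.9%) compression
high = experience_bits(100, 30, 1e6, 10)    # light (90%) compression
print(f"low:  {low:.1e} bits")
print(f"high: {high:.1e} bits")
```

With these placeholder inputs the estimate spans roughly 3×10^11 to 10^16 bits; the spread is dominated by the bits-per-frame and compression assumptions.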
This is the consciously accessed data stream only, which is why it is much smaller than the full human brain.
“But the full latent input-output capabilities of the human brain can be obtained by training the brain on its experience!” Yes, and that training makes use of data not consciously accessed, which I believe is much larger than the consciously accessed data stream.
Kolmogorov complexity of a human baby’s brain
A baby hasn’t begun learning, so I’ll assume that the human genome is a sufficient description of a baby’s brain.
Kolmogorov complexity
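Under the assumption above that the genome suffices to describe a baby's brain, an upper bound is easy to compute: the human genome is about 3.1 billion base pairs, and each base (A, C, G, T) carries at most 2 bits:

```python
# Upper bound on the baby-brain description length under the note's
# assumption that the genome suffices. ~3.1e9 base pairs is the
# standard human genome size; 2 bits encode one of 4 bases.

BASE_PAIRS = 3.1e9
bits = BASE_PAIRS * 2           # ~6.2e9 bits
megabytes = bits / 8 / 1e6      # ~775 MB uncompressed
print(f"{bits:.1e} bits ≈ {megabytes:.0f} MB")
```

This ~775 MB figure is an upper bound: the genome is substantially compressible (e.g. repetitive sequences), so the true Kolmogorov complexity is lower.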
Kolmogorov complexity of any generic human-level observer
I really have no idea. The space of mind designs is huge; there are likely some very compact designs. to bits, maybe?
I think this should be in a framework that takes granularity into account in some sense. I assume you are thinking about the Kolmogorov complexity of simulating a system observationally similar to one of the following:
Case 1: either A: a generic, realistic-looking adult brain (hard to estimate the complexity), or B: the brain of an individual person (complexity on the order of the number of synapses).
When you say Claude Opus is more intelligent than Claude Haiku: in case B, Opus would definitely have more complexity, but in case A:
---If Opus and Haiku were trained on the same dataset, they would have almost the same complexity except for num_parameters in the code (Opus would have 1-2 bits more).
---More interestingly, it could be that Opus seems to have more general intelligence, rather than just more knowledge, because its architecture is more expressive and can learn an underlying algorithm that is a little bigger; but if you simulated Opus and Haiku with a more advanced architecture, maybe both would have the same “raw intelligence”. This is related to the texture-vs-shape bias in image classifiers: models above ~1B parameters start recognizing things by shape rather than texture, which seems more like a simple algorithmic improvement than something that fundamentally required a higher parameter count.
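The "1-2 bits more" claim can be made concrete: if Opus and Haiku share the same training code and dataset, their descriptions differ only in the choice of the size hyperparameter, and choosing one of n plausible settings costs log2(n) bits. The candidate sizes below are purely hypothetical:

```python
import math

# Extra description length to specify the larger model, given shared
# training code and data: just the choice among plausible size settings.
# The four candidate parameter counts are hypothetical examples.

candidate_sizes = [8e9, 70e9, 400e9, 2e12]
extra_bits = math.log2(len(candidate_sizes))
print(f"{extra_bits:.0f} bits")  # prints "2 bits"
```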
Case 2: your subjective experience (could be compiled into a list of brain activations and sensory data)
Case 3: a generic human baby brain.
Case 4: Step 1: take the laws of physics + a PRNG and simulate a universe; presumably it will contain some intelligences. Step 2: build an “intelligence detector”, something that can detect human-like civilizations, e.g. by finding complex radio emissions, then locate the brains somehow. This likely fits in less than 1 MB.