Attempting to estimate AGI compute requirements from the visual cortex and image classification has a long connectionist history: Moravec did this repeatedly, Drexler has another version in his QNR whitepaper, and an Open Phil intern was comparing to bees for similar reasons. Might be worth comparing them.
For everybody who didn’t know—like me—what QNRs are:
Learned, quasilinguistic neural representations (QNRs) that upgrade words to embeddings and syntax to graphs can provide a semantic medium that is both more expressive and more computationally tractable than natural language, a medium able to support formal and informal reasoning, human and inter-agent communication, and the development of scalable quasilinguistic corpora with characteristics of both literatures and associative memory. QNR-based systems can draw on existing natural language and multimodal corpora to support the aggregation, refinement, integration, extension, and application of knowledge at scale. The incremental development of QNR-based models can build on current capabilities and methodologies in neural machine learning, and as systems mature, could potentially complement or replace today’s opaque “foundation models” with systems that are more capable, interpretable, and epistemically reliable. Potential applications and implications are broad.
Drexler’s Language for Intelligent Machines: A Prospectus
on LW: QNR prospects are important for AI alignment research
The bees post was by Guilhermo Costa, an Open Phil intern. My comment has some discussion of the “but biological brains do so much more stuff than ML classifiers” point.
This is the 1976 Moravec calculation:
https://frc.ri.cmu.edu/~hpm/project.archive/general.articles/1978/analog.1978.html
Assuming the visual cortex (and possibly the optic nerve itself) is as computationally intensive as the retina, successive layers producing increasingly abstracted representations, we can estimate the total capability. There are a million separate fibers in a cross section of the human optic nerve. The thickness of the optical cortex is a thousand times the depth occupied by the neurons which apply a single simple operation. The eye is capable of processing images at the rate of ten per second (flicker at higher frequencies is detected by special operators). This means that the human visual system evaluates 10,000 million pixel simple operators each second.
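To make Moravec’s arithmetic explicit, a minimal sketch in Python; the million optic-nerve fibers, 1,000× depth ratio, and 10 images/second are the figures from the quoted passage, and the variable names are mine:

```python
# Moravec's 1976 back-of-the-envelope estimate, using the figures quoted above.
optic_nerve_fibers = 1_000_000  # ~1 million fibers in a cross section of the optic nerve
cortex_depth_ratio = 1_000      # visual cortex is ~1,000x the depth of one simple-operation layer
images_per_second = 10          # the eye processes ~10 images per second

pixel_ops_per_second = optic_nerve_fibers * cortex_depth_ratio * images_per_second
print(f"{pixel_ops_per_second:.0e} pixel simple operations per second")
# -> 1e+10, i.e. the "10,000 million" in the quote
```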