The intuition for ‘faster’ seems more straightforward to justify: in general, more technological capability will be available as time progresses, while the requirements for brain emulation stay constant. I think it’s interesting to focus on what could cause a slower scenario.
It’s possible that, without complete digitization of a sufficiently complex animal brain, further progress toward human-level intelligence, and by extension toward very advanced future LLMs, will be intractable. For example, there may be many supposed breakthroughs in continual learning for neural networks or in high sample-efficiency learning, but for some reason it will not be possible to glue all of those things together, and the critical insights needed to do so will seem very hard or almost impossible to invent.
It may be the case that scaling AGI-like intelligence is like trying to increase velocity in a fluid. It’s more complex than just a quadratic increase in drag: the type of flow changes near and above the speed of sound, and there may be multiple supersonic-like transitions that are incredibly complex to understand.
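For concreteness, the ‘simple’ regime of the analogy is the subsonic drag law, where resistance grows only quadratically with velocity:

```latex
% Subsonic drag: resistance grows quadratically with velocity
F_d = \tfrac{1}{2}\,\rho\, C_d\, A\, v^{2}
```

The point of the analogy is that near and above the speed of sound the drag coefficient stops behaving like a constant and becomes a strongly nonlinear function of Mach number; similarly, each new capability regime might change the scaling law itself, not just the exponent.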
The radical variation in human intellectual capability, given a supposedly near-identical substrate, seems contradictory at first. It’s possible that, for some evolutionary reason, the ability to form complex circuits is stunted in typical, non-anomalous brains.
Digitization of a sufficiently complex brain at sufficient resolution may not be possible without nanotechnology that can monitor a living brain in real time across its entire volume. There might be approaches to synthetically grow flat brains with scaled-up features, but it may not be possible to train such a brain, for reasons that are hard to understand or even speculate about now. Nanotechnology at this level may not be possible to develop without ASI.
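For a rough sense of scale, here is a back-of-envelope sketch; the voxel size, brain volume, and bytes-per-voxel figures are assumptions for illustration, not measurements. Even a single static snapshot at roughly synaptic resolution implies an enormous raw data volume, before any real-time monitoring of dynamics is considered.

```python
# Back-of-envelope: raw data volume for whole-brain imaging at ~synaptic resolution.
# All constants are rough assumptions for illustration only.

brain_volume_m3 = 1.4e-3   # ~1.4 liters, typical adult human brain
voxel_edge_m = 10e-9       # assume ~10 nm voxels to resolve synaptic structure
bytes_per_voxel = 1        # assume ~1 byte of labeled/compressed data per voxel

voxels = brain_volume_m3 / voxel_edge_m**3
raw_bytes = voxels * bytes_per_voxel

print(f"voxels:   {voxels:.1e}")                       # ~1.4e21 voxels
print(f"raw data: {raw_bytes / 1e21:.1f} zettabytes")  # ~1.4 ZB for one static snapshot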
It’s possible that the brain requires much more compute than a naive estimate based on synaptic firing frequency suggests.
I have often heard that what a single neuron does is extremely complex. On the other hand, the frequency of synaptic firing suggests there isn’t much data transmitted in total. This is relatively hard for me to reconcile: on one hand, Hans Moravec-style estimates (the computing capacity of the retina, scaled up by the relative size of the brain) make sense; on the other hand, outside the retina, at the whole-brain level, some sort of data augmentation may be happening that actually consumes 99.9% of the compute, and those processes may rely on very complex in-neuron operations.
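To make the tension concrete, here is a back-of-envelope comparison, a sketch using rough, commonly cited order-of-magnitude figures (the exact constants are assumptions): a Moravec-style scale-up from the retina, a naive synaptic-event count, and what a 1000x ‘hidden compute’ factor would do to the latter.

```python
# Back-of-envelope brain compute estimates. All constants are rough,
# commonly cited order-of-magnitude figures, used here only for illustration.

# 1) Moravec-style estimate: retina throughput scaled up by relative size.
retina_ops_per_s = 1e9          # very roughly ~1000 MIPS-equivalent for retinal processing
brain_to_retina_ratio = 7.5e4   # brain is roughly ~75,000x the retina by mass/volume
moravec_brain_ops = retina_ops_per_s * brain_to_retina_ratio   # ~1e14 ops/s

# 2) Naive synaptic-event estimate: synapse count times average firing rate.
synapses = 1e14                 # ~10^14 synapses
avg_rate_hz = 1.0               # ~1 Hz average firing rate
ops_per_event = 10              # assume a handful of ops per synaptic event
synaptic_brain_ops = synapses * avg_rate_hz * ops_per_event     # ~1e15 ops/s

# 3) If 99.9% of the real work were "hidden" inside neurons (dendritic computation,
#    local plasticity, etc.), visible spike traffic would understate compute ~1000x.
hidden_factor = 1000
print(f"Moravec-style estimate:   {moravec_brain_ops:.0e} ops/s")
print(f"Synaptic-event estimate:  {synaptic_brain_ops:.0e} ops/s")
print(f"With 1000x hidden factor: {synaptic_brain_ops * hidden_factor:.0e} ops/s")
```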
It may not be possible to design and manufacture high-volume, brain-like programmable substrates without ASI, and ASI may not be achievable without them; or it may be extremely hard and require multiple terawatts of compute because of the compute-requirement point above.
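To see how a terawatt-scale figure could follow from the compute point, here is a hedged continuation of the sketch above; the accelerator efficiency and the number of brain-equivalents are assumptions, not claims.

```python
# Continuation of the sketch above: power cost if the brain really needs ~1e18 ops/s.
# Efficiency and scale numbers are assumptions for illustration only.

brain_equivalent_ops = 1e18     # ops/s with the 1000x "hidden compute" factor applied
accel_ops_per_joule = 1e11      # assumed accelerator efficiency (~100 GFLOP per joule)

watts_per_brain = brain_equivalent_ops / accel_ops_per_joule    # ~1e7 W = 10 MW
brain_equivalents_for_search = 1e6   # assume a large training/search population

total_watts = watts_per_brain * brain_equivalents_for_search
print(f"per brain-equivalent: {watts_per_brain / 1e6:.0f} MW")
print(f"for 1e6 equivalents:  {total_watts / 1e12:.0f} TW")     # ~10 TW
```

Under these assumptions, a single real-time brain-equivalent already costs tens of megawatts, and any process that needs millions of brain-equivalents (or an equivalent speed-up over real time) lands in the terawatt range.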
The current LLM takeoff suggests that intelligence is relatively “simple” to solve, but this type of text-based pattern processing could in fact be OOMs more efficient than animal-like intelligence, due to the pattern-compressing function of language. If bootstrapping LLMs to AGI fails, it could be really hard to find a paradigm that gets closer. A new paradigm may get closer but still turn out to be relatively limited. This situation could repeat many times, without obvious solutions on the horizon.
For now, I think that’s enough. I need to think about this more.