There are other ways to optimize the brain, such as improving axonal transmission speed from the current range of 0.5–10 meters/sec to something more like the speed of electrical signals through wires, ~250,000,000 meters/sec.
I agree this is the main obvious improvement of digital minds, and speculated on some implications here a decade ago. But if it requires even just 1 kW of power flowing through GPUs to match one human brain, then devoting all of current world power output to GPUs would still not produce more equivalent brainpower than humanity has (world power output ~4 TW, and GPU production would have to increase by OOMs).
You could use all of world power output to run a few billion human-speed AGIs, or millions that think 1000x faster, etc.
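The numbers above can be sanity-checked with quick arithmetic. This sketch uses only the figures from the comment (~1 kW per brain-equivalent, ~4 TW world power output) plus one implicit assumption: running a mind k times faster costs roughly k times the power, so serial speedup trades off against the number of parallel copies at a fixed power budget.

```python
# Back-of-envelope check of the power-budget argument.
# All figures are the comment's assumptions, not measured values.
WORLD_POWER_W = 4e12      # ~4 TW world power output
WATTS_PER_BRAIN = 1e3     # ~1 kW of GPU power per human-brain-equivalent
HUMAN_POPULATION = 8e9    # ~8 billion humans

# Spending all world power on 1x-speed brain-equivalents:
brains_at_1x = WORLD_POWER_W / WATTS_PER_BRAIN
print(f"{brains_at_1x:.0e} brain-equivalents at 1x speed")  # 4e9, fewer than ~8e9 humans

# Same budget spent on 1000x-speed minds instead (assuming power scales
# linearly with speed):
speedup = 1000
minds_at_1000x = brains_at_1x / speedup
print(f"{minds_at_1000x:.0e} minds at {speedup}x speed")    # 4e6, i.e. "millions"
```

So under these assumptions the whole world power budget buys either ~4 billion human-speed minds or ~4 million 1000x-speed minds, which is where the "few billion human-speed AGIs, or millions that think 1000x faster" trade-off comes from.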
Isn’t it insanely transformative to have millions of human-level AIs which think 1000x faster? The difference between top scientists and average humans seems to be something like “software” (Einstein wasn’t using 2x the watts or neurons). So it should be totally possible for each of the “millions of human-level AIs” to be equivalent to Einstein. Couldn’t a million Einstein-level scientists running at 1000x speed beat all human scientists combined? And, taking this further, some humans seem to be at least 100x more productive at science than others, despite the same brain constraints. Then why shouldn’t it be possible to go further in that direction, and have someone 100x more productive than Einstein at the same flops? If this is possible, whatever efficiency constraints the brain is achieving cannot be a barrier to foom, just as the energy efficiency (and supposed learning optimality?) of the average human brain does not rule out Einstein more-than-100x-ing them at the same flops.
Of course, my argument doesn’t pin down the nature or rate of software-driven takeoff, or whether there is some ceiling; just that the “efficiency” arguments don’t seem to rule it out, and that there’s no reason to believe that science-per-flop has a ceiling near the level of top humans.
The whole “compute greater than humanity” threshold does not seem like a useful metric: it’s just completely unnecessary to exceed total human compute to disempower humans. We parallelize extremely poorly. And given how recent human civilization at this scale is, and how adversarial humans are toward each other, it would be surprising if we used our collective compute in even a remotely efficient way. Not to mention the bandwidth limitations.
The summed compute of conquistador brains was much less than that of those they disempowered. The summed compute of slaughterhouse workers’ brains is vastly less than that of the chickens they slaughter in a single month!
I don’t think this point deserves any special salience at all.
In your view, is it possible to make something which is superhuman (i.e. scaled beyond human level), if you are willing to spend a lot on energy, compute, engineering cost, etc?
Isn’t it insanely transformative to have millions of human-level AIs which think 1000x faster? The difference between top scientists and average humans seems to be something like “software” (Einstein wasn’t using 2x the watts or neurons). So it should be totally possible for each of the “millions of human-level AIs” to be equivalent to Einstein. Couldn’t a million Einstein-level scientists running at 1000x speed beat all human scientists combined?
And, taking this further, some humans seem to be at least 100x more productive at science than others, despite the same brain constraints. Then why shouldn’t it be possible to go further in that direction, and have someone 100x more productive than Einstein at the same flops? If this is possible, whatever efficiency constraints the brain is achieving cannot be a barrier to foom, just as the energy efficiency (and supposed learning optimality?) of the average human brain does not rule out Einstein more-than-100x-ing them at the same flops.
Yes, it will be transformative.
GPT models already think 1000x to 10000x faster than humans, but only for the learning stage (absorbing knowledge), not for inference (thinking new thoughts).