Humanity is generating and consuming enormous amounts of power—why is the power budget even relevant? And even if it were, the energy for running brains ultimately comes from the Sun—if you include the agricultural energy chain, and “grade” the energy efficiency of brains by the amount of solar energy it ultimately takes to power a brain, AI definitely has the potential to be more efficient. And even if a single human brain is fairly efficient, human civilization clearly is not. With AI, you can quickly scale up the amount of compute you use, whereas with humans, scaling beyond a single brain is very inefficient.
To put some numbers on that, USA brains directly consume 20W × 330M = 6.6 GW, whereas the USA food system consumes ≈500 GW [not counting sunlight falling on crops] (≈15% of the 3300 GW total USA energy consumption).
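A quick sanity check on those figures (all inputs are the rough estimates from the comment above, not authoritative measurements):

```python
# Assumed inputs: ~20 W per brain, ~330M people, ~500 GW food-system power,
# ~3300 GW total US energy consumption (all from the comment above).
WATTS_PER_BRAIN = 20
US_POPULATION = 330e6
FOOD_SYSTEM_GW = 500
TOTAL_US_GW = 3300

brain_gw = WATTS_PER_BRAIN * US_POPULATION / 1e9  # watts -> gigawatts
food_share = FOOD_SYSTEM_GW / TOTAL_US_GW

print(f"Direct brain power: {brain_gw:.1f} GW")         # -> 6.6 GW
print(f"Food system share of total: {food_share:.0%}")  # -> 15%
print(f"Food-chain overhead per watt of brain: {FOOD_SYSTEM_GW / brain_gw:.0f}x")
```

The last line makes the “human civilization is clearly not efficient” point concrete: the food chain spends roughly 75x the power the brains themselves draw.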
If the optimal AGI design running on GPUs takes about 10 GPUs and 10 kW to rival one human-brain power, and a superintelligence that kills humanity à la the foom model requires 10 billion human-brain power—and thus 100 billion GPUs and a 100-terawatt power plant—that is just not something that is possible in any near term.
In EY’s model there is supposedly a 6 OOM improvement from nanotech, so you could get the 10 billion human-brain power with a much more feasible 100 MW power plant and the equivalent of roughly 100 thousand GPUs.
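The arithmetic behind the two scenarios above, spelled out (every figure here is an assumption taken from the comments, not a measurement):

```python
# Assumed conversion rates from the comment above.
GPUS_PER_HBP = 10      # GPUs per human-brain-power (HBP) equivalent
KW_PER_HBP = 10        # kW per HBP equivalent
SI_HBP = 10e9          # assumed HBP for a humanity-killing superintelligence

gpus = GPUS_PER_HBP * SI_HBP          # total GPUs required
power_tw = KW_PER_HBP * SI_HBP / 1e9  # kW -> TW

# EY-style nanotech: supposedly ~6 orders of magnitude more efficient.
NANOTECH_OOM = 6
factor = 10**NANOTECH_OOM

print(f"GPU scenario: {gpus:.0e} GPUs, {power_tw:.0f} TW")
print(f"Nanotech scenario: {gpus / factor:.0e} GPU-equivalents, "
      f"{power_tw / factor * 1e6:.0f} MW")
```

This reproduces the 100 billion GPUs / 100 TW figure, and shows that dividing by 6 OOMs lands exactly on the 100 thousand GPU-equivalents / 100 MW numbers above.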
You’re assuming sublinear scaling. Why wouldn’t it be superlinear post-training? It certainly seems like it is now. It need not be sharply superlinear, as Yud expected, to still be superlinear.
Exactly! I’d expect compute to scale way better than humans—not necessarily because the intelligence of compute scales so well, but because the intelligence of human groups scales so poorly...
So I assumed a specific relationship between “one unit of human-brain power” and “a superintelligence capable of killing humanity”. I use human-brain power as a unit, but that doesn’t actually have to imply linear scaling—imagine a graph with two labeled data points, one at (human, X: 1) and another at (SI, X: 10B); you can draw many different curves connecting those two points, and the Y axis is somewhat arbitrary.
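One way to see the “many curves through two points” claim: any power law y = x^a passes through (1, 1), so the exponent alone determines what happens at X = 10B. The exponents below are arbitrary, chosen purely for illustration:

```python
# The (SI, X:10B) point from the comment above; exponents are illustrative.
SI_X = 10e9

for a, label in [(0.8, "sublinear"), (1.0, "linear"), (1.2, "superlinear")]:
    # Every curve y = x**a agrees at x = 1 (y = 1) but diverges wildly at X = 10B.
    print(f"{label:11s} (a={a}): y(10B) = {SI_X ** a:.1e}")
```

Sublinear, linear, and superlinear curves all fit the two labeled points equally well, which is the sense in which the unit choice doesn’t commit you to linear scaling.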
Now maybe 10B HBP to kill humanity seems too high, but I assume humanity as a civilization which includes a ton of other compute, AI, and AGI, and I don’t really put much credence in strong nanotech.
To be clear, I don’t know anyone who would currently defend the claim that you need a single system with the computational needs of all 10 billion human brains. That seems like at least 5 OOMs too much. Even simulating 10M humans is likely enough, but you can probably do many OOMs better by skipping the incredible inefficiency of humans coordinating with each other in a global economy.
If you believe modern economies are incredibly inefficient coordination mechanisms, that’s a deeper disagreement that goes beyond this post.
But in general my estimate for the intellectual work required to create an entirely new path to a much better compute substrate is something at least vaguely on the order of the amount of intellectual work accumulated into our current foundry tech.
That is not my estimate for the minimal amount of intelligence required to take over the world in some sense—that would probably require less. But again, this is focused on critiquing scenarios where a superintelligence (something greater than humanity in net intelligence) rapidly bootstraps from AGI.
Yep, that seems like a plausible and relevant crux. Modern economies sure seem incredibly inefficient, especially when viewed through the lens of “how much is this system doing long-term planning and trying to improve its own intelligence”.