Instead, my guess (based largely on lots of opinions about exactly what computations the human brain is doing and how) is that human-level human-speed AGI will require not a data center, but rather something like one consumer gaming GPU—and not just for inference, but even for training from scratch.
If this is right, then it seems like AI governance is completely and resoundingly fucked, and we’re back to the pre-2021 MIRI paradigm of thinking that we need to solve alignment before AGI is invented.
“Completely and resoundingly fucked” is mildly overstated, but mostly “Yes, that’s my position”; see §1.6.1, §1.6.2, and §1.8.4.