The past year of frenzied AI codegen activity everywhere, together with the continued steady progress in AI, has made it more intuitive to people that there won’t be much “money on the table” for some nascent AGI to eat up, because it will enter a teeming ecosystem of AI systems, humans, and their interactions. For example, though there are technically some self-sustaining AIs paying their own server costs, they struggle to compete with purposeful human+AI entities that deliberately try to steal the customers of AI-only businesses whenever those get too big. Cyber competition is also increasingly tough, meaning that any single rogue AI would have a rough time defeating the rest of the world.
I don’t think this is how it works. “There’s plenty of room at the top,” I say: if we haven’t hit AGI yet, there will still be money on the table for smarter AI systems to eat up. Lots of it. Also, the ability of rogue AIs to defeat the rest of the world doesn’t depend much on how much cyber competition there is, because it doesn’t depend much on cyber at all.
However, no evidence by the end of 2027 has ruled out a sharper takeoff, and those who believe in one are increasingly either frantic and panicking, or else stoically equanimous and resigned, expecting the final long-term agentic-planning piece to slot into place at any moment and doom the world. The labs, meanwhile, are openly talking about recursive self-improvement as their strategy.
...
Anthropic’s initial recursive self-improvement efforts allow them to create superhuman coding, maths, and AI-research AIs in 2028. However, the economics of the self-improvement curve are not particularly favourable, largely because AI-driven AI research is bottlenecked by compute-intensive experiments. The automated Claude Epic researchers, while vastly superhuman at any short-horizon task, also don’t seem vastly superhuman at “research taste”. This is expected to change with enough long-horizon RL training, and with greater AI-to-AI “cultural” learning, as countless AI instances build up a shared body of knowledge about which methods and avenues work.
OK, seems like we have a major disagreement about takeoff speeds here! Could you elaborate on your view? I agree that AI-driven AI research will be bottlenecked by compute-intensive experiments; my own model/calculations nevertheless suggest superintelligence will be achieved in less than a year from the point you describe as happening in 2027–2028. I’ll try to put up some blog posts soon...
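In the meantime, here is a deliberately crude toy sketch of where such a disagreement lives (not my actual model; every number below is a placeholder): capability compounds through AI research labour, assumed to scale with capability itself, combined Cobb-Douglas-style with experiment compute that grows only a few percent a month. The takeoff speed then hinges almost entirely on the assumed returns to cognitive labour `alpha` and raw research productivity `k`, not on whether one grants the compute bottleneck.

```python
# Toy compute-bottlenecked recursive self-improvement model.
# All numbers below are illustrative placeholders, not fitted estimates.

def months_until_threshold(alpha, k, compute_growth=1.03,
                           threshold=2 ** 10, max_months=1200):
    """Months until capability reaches `threshold` (~10 doublings, a
    stand-in for 'superintelligence' relative to the 2027-28 start).

    alpha: returns to cognitive labour. Near 1, more/smarter AI
    researchers drive progress; well below 1, compute-hungry
    experiments bottleneck it.
    k: raw research productivity per month.
    """
    capability, compute = 1.0, 1.0
    for month in range(1, max_months + 1):
        labour = capability  # assumption: AI research labour scales with capability
        research = k * (labour ** alpha) * (compute ** (1 - alpha))
        capability += research
        compute *= compute_growth  # slow growth in experiment compute
        if capability >= threshold:
            return month
    return None  # no takeoff within the horizon

for alpha, k in [(1.0, 0.7), (1.0, 0.3), (0.5, 0.3)]:
    print(f"alpha={alpha}, k={k}: {months_until_threshold(alpha, k)} months")
```

Under these made-up numbers the same structure yields takeoff in about 14 months, about 27 months, or roughly eight years, so “less than a year” versus “not particularly favourable economics” is mostly a fight over alpha and k rather than over the compute-bottleneck observation itself.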