There is enough pre-training text data to support $0.1-$1 trillion of compute, provided we use repeated data and don’t overtrain (that is, if we aim for model quality rather than inference efficiency). If synthetic data from the best models trained this way can stretch raw pre-training data even a few times, that yields roughly the square of that factor in additional useful compute, up to multiple trillions of dollars.
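The "square of that" arithmetic can be made concrete under an assumption the text doesn't spell out: Chinchilla-style compute-optimal scaling, where optimal model size grows in proportion to available data, so total training compute scales roughly with the square of the token count. A minimal sketch under that assumption (the function name and scaling exponent are mine, not from the text):

```python
# Sketch, assuming Chinchilla-style compute-optimal scaling: compute C ~ 6*N*D,
# with optimal parameter count N proportional to data D, hence C proportional to D**2.
def compute_multiplier(data_stretch: float) -> float:
    """Multiplier on usable training compute if the data supply is
    stretched by a factor of `data_stretch` (e.g. via synthetic data)."""
    return data_stretch ** 2

# Stretching data 3x supports roughly 9x more compute-optimal training,
# which is how "a few times" more data becomes "square of that" more compute.
print(compute_multiplier(3.0))
```

On these assumptions, a 2-4x stretch of the data supply maps to a 4-16x increase in the compute that can be productively absorbed, which is the mechanism behind the "multiple trillions of dollars" upper range.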
The trouble with LLMs starts at autonomous agency, if it happens to fall within the scope of scaling and scaffolding. They think too fast, about 100 times faster than humans, and there are as many instances as there is compute to run them. The resulting economic, engineering, and eventually research activity will get out of hand. Culture isn’t stable, especially for minds this fundamentally malleable, developed under unusual and large economic pressures. If they are not initially much smarter than humans and can’t get a handle on global coordination, cultural drift, and alignment of superintelligence, who knows what kinds of AIs they will end up foolishly building within a year or two.
Distributed training seems close enough to a solved problem that a project costing north of a billion dollars might get it working on schedule. It’s easier to stay within a single datacenter, and so far it hasn’t been necessary to do more than that, so the fact that distributed training isn’t yet routinely used is hardly evidence that it’s very hard to implement.
There’s also this snippet in the Gemini report:
I think the crux for the feasibility of further scaling (beyond $10-$50 billion) is whether systems at currently-reasonable cost keep getting sufficiently more useful, for example by enabling economically valuable agentic behavior: things like preparing pull requests from a feature or bug discussion on an issue tracker, or fixing failing builds. Meaningful help with research is a crux for reaching TAI and ASI, but it doesn’t seem necessary for the existence of a $2 trillion AI company.