AGI is a technical milestone, so I don’t see what you are gesturing at with vibes as arguments about AI company advantages. I think a 100x advantage in compute remains crucial, a 10x advantage matters as much as technical brilliance, and a 3x advantage doesn’t matter. Better chips are also not strictly needed for larger-scale training, because critical batch size scales fast enough for LLMs; they are merely cheaper (though by multiple times, in both cost and power). Good chips do help enormously with inference.
So the issue for SSI/DeepSeek/Mistral is that they plausibly remain 10x behind in compute through 2026, while Google retains its compute advantage even without having produced the most capable model so far (only the cheapest-for-its-capabilities).
How large of an advantage do you think OA gets relative to its competitors from Stargate?
With Stargate, there is only the Abilene site and a relatively concrete prospect of maybe $40bn so far, enough to build a 1 GW Blackwell training system (supporting ~4e27 FLOPs models) in 2025-2026, the same scale as Musk announced this week. Anthropic’s compute for 2026 remains opaque (“a million of some kind of chip”), and Google probably has the most in principle, but with unclear willingness to spend. Meta said nothing to indicate that its Richland Parish site will host 1 GW of Blackwells in 2025-2026; it remains a vague 2 GW by 2030 plan.
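The 1 GW → ~4e27 FLOPs correspondence can be reproduced with a rough back-of-envelope calculation. All the per-chip numbers below (all-in power draw, dense throughput, utilization, run length) are my own illustrative assumptions, not vendor specs, so treat this as a sketch of the reasoning rather than a precise estimate:

```python
# Back-of-envelope: training FLOPs available from a 1 GW Blackwell system.
# Every constant here is an illustrative assumption, not a quoted spec.

site_power_w = 1e9          # 1 GW of total site power
power_per_chip_w = 2_000    # ~2 kW per chip all-in (cooling, networking, overhead) -- assumed
peak_flops = 2.25e15        # ~2.25e15 dense BF16 FLOP/s per Blackwell chip -- assumed
mfu = 0.4                   # model FLOPs utilization during training -- assumed
run_seconds = 100 * 86_400  # a ~100-day training run -- assumed

n_chips = site_power_w / power_per_chip_w
training_flops = n_chips * peak_flops * mfu * run_seconds
print(f"{n_chips:.0f} chips, {training_flops:.1e} training FLOPs")
# -> 500000 chips, 3.9e+27 training FLOPs
```

Under these assumptions the site supports roughly 500K chips and lands within rounding of the 4e27 FLOPs figure; the answer moves linearly with each assumed constant, so a factor-of-2 change in per-chip power or utilization shifts the result by the same factor.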