A post going over how much compute each frontier AI lab has would likely be very helpful.
Here are a couple of my recent relevant posts (both slightly outdated; in particular, see this comment and the note on Gemini 2 Ultra in another comment under this quick take). In this quick take, though, I’m mostly discussing total parameter count and HBM capacity per scale-up world rather than compute: how those constraints limit 2025 AIs beyond compute itself (so that even 2024 levels of compute fail to find efficient use), and how in 2026 these constraints become less strict.