So I read another take on OpenAI’s finances and was wondering: does anyone know why Altman is making such a gamble, raising enormous investments to pour into new models in the hope that they’ll generate sufficiently insane profits to make it worthwhile? Even ignoring the concerns around alignment etc., there’s still the straightforward issue of “maybe the models are good and work fine, but aren’t good enough to pay back the investment”.
Even if you did expect scaling to probably bring in huge profits, naively it’d still be wiser to pick a growth strategy that didn’t require your company to become literally the most profitable company in the history of all companies or go bankrupt.
The obvious answer is something like “he believes they’re on the way to ASI, and whoever gets there first wins the game”, but I’m not sure it makes sense even under that assumption: his strategy requires not just getting to ASI first, but never once faltering on the path there. Even if ASI really is imminent, arriving just two years later than he expected might by itself be enough that OpenAI is done for. He could have raised much more conservative investment and still been in the game, especially since much of the current arms race is plausibly a response to the sums OpenAI has been raising.
According to an external report last year, OpenAI was projected to burn through $8 billion in 2025, rising to $40 billion in 2028. Given that the company reportedly predicts profitability by 2030, it’s not hard to do the math on how much cumulative loss has to be financed before then.
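To make that concrete, here’s a minimal back-of-the-envelope sketch in Python. The $8bn and $40bn endpoints are from the report above; the linear ramp between them and the taper to breakeven by 2030 are purely my assumptions, not anything the report states:

```python
# Hypothetical burn path: linear ramp between the two reported endpoints
# ($8bn in 2025, $40bn in 2028), then an assumed taper to breakeven by 2030.
reported = {2025: 8.0, 2028: 40.0}  # $bn/year, from the report
ramp = [reported[2025] + (reported[2028] - reported[2025]) * i / 3 for i in range(4)]
taper = [20.0, 0.0]  # 2029-2030: pure assumption, winding down to profitability
burn_path = ramp + taper

print([round(b, 1) for b in burn_path])              # [8.0, 18.7, 29.3, 40.0, 20.0, 0.0]
print(f"cumulative burn: ~${sum(burn_path):.0f}bn")  # ~$116bn before breakeven
```

Even under those fairly generous assumptions, that’s on the order of $100bn+ of losses that somebody has to finance before the first profitable year.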
Altman’s venture has projected $1.4 trillion of spending on datacenters. As Sebastian Mallaby, an economist at the Council on Foreign Relations, notes, even if OpenAI rethinks those limerence-influenced promises and “pays for others with its overvalued shares”, there’s still a financial chasm to cross. Mallaby isn’t the only one thinking along these lines: Bain & Company reported last year that, even with the best outlook, there’s at least an $800 billion black hole in the industry.
There are no details for most of these commitments. They are likely flexible enough to get scaled back if growth isn’t there, as most of this capex is for projects that are still years in the future. There also might be some double counting, such as with Nvidia’s commitment, which might just apply to Stargate compute covered by Oracle, indirectly letting OpenAI pay for some of Oracle’s compute in stock. But since growth might end up real, it’s useful to plan for that possibility in advance and get reservations for all the things that need to happen, which might be the main purpose of these commitments.
It’s not about scaling: inference alone already wants 1-2 GW of compute per AI company, just from the number of users. Apparently it’s feasible to maintain margins around 50%, so about the same amount of research/training compute (on top of the inference compute) can also be sustainably financed. The cost of 4 GW of compute (inference plus research/training) is about $50bn per year, but as upfront capex it’s about $200bn. And that compute mostly doesn’t physically exist yet (especially in the form of the newer rack-scale servers needed to efficiently serve larger models), hence it’s being built in a hurry, with hundreds of billions of dollars compressed into a shorter window than the long-term per-year economics alone would call for.
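For concreteness, here’s that arithmetic as a Python sketch. The 2 GW of inference and ~50% margin are the comment’s own round numbers; the per-GW cost figures are implied by its $50bn/year and $200bn totals, not sourced independently:

```python
# Back-of-the-envelope version of the comment's compute economics.
# Every figure here is an assumption taken (or implied) from the comment.

inference_gw = 2.0    # inference compute per large AI company (GW)
gross_margin = 0.5    # assumed sustainable margin on inference revenue

# At a 50% margin, gross profit from inference can fund an equal
# amount of research/training compute on top of the inference fleet.
training_gw = inference_gw * gross_margin / (1 - gross_margin)
total_gw = inference_gw + training_gw  # -> 4 GW

capex_per_gw = 50e9          # implied upfront cost of 1 GW (~$200bn / 4 GW)
annual_cost_per_gw = 12.5e9  # implied annualized cost (~$50bn / 4 GW)

upfront_capex = total_gw * capex_per_gw      # ~$200bn
annual_cost = total_gw * annual_cost_per_gw  # ~$50bn/year

print(f"total compute:   {total_gw:.0f} GW")
print(f"upfront capex:   ${upfront_capex / 1e9:.0f}bn")
print(f"annualized cost: ${annual_cost / 1e9:.0f}bn/yr")
print(f"implied payback: {upfront_capex / annual_cost:.1f} years")
```

The ~4x gap between the upfront bill and the annualized cost is what makes the spending look front-loaded: it’s roughly consistent with amortizing the hardware over a handful of years rather than paying for it as you go.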
So even if further growth doesn’t materialize, current spending only appears greater than it should be (because capex is front-loaded and currently catching up to the demand for compute); it isn’t actually greater than what would be sustainable once the buildout for current levels of demand is done, construction proceeds more slowly, and the future stretch commitments are scaled down. The plans for future spending do go beyond what the current economics would sustain, but they need to be made in case growth continues, and they can probably still be gracefully canceled if it doesn’t, depending on the non-public legal details of what all these commitments actually say.
Off the top of my head: he’s hoping OpenAI, and the AI boom in general, is too big to fail. In 18 months, Trump will still be in office. The only aspect of the economy he’s rated a “success” on is rising stock prices, and those have been driven by AI. He’s openly trying to force the Fed to reinstate ZIRP. He could force a bailout of OpenAI (and others) on national security grounds. These are considerations I’d fully expect Altman and others to have consciously weighed, planned for, and even discussed with Trump and his advisors.
Altman doesn’t own equity in OpenAI; he’s doing it for the glory. He genuinely believes he might give birth to the AI god. Why should he do anything different, from his vantage point?
Why should he do anything different, from his vantage point?
Because the AI god might happen in 2033 or something, after OpenAI mostly or fully goes out of business (per the more bubble-pilled takes), in which case only the more conservative path of Anthropic, or the more balance-sheet-backstopped path of GDM, lets these AI companies still be competing at that time.
So it’s not enough to only care about the AI god; it’s also necessary to expect it on a specific timeline. Or, as I argue in a sibling comment, this behavior plausibly isn’t actually as risky (for the business) as it looks.
Of course being right is better than being wrong. Ideally he should know the exact date of the arrival of the Superintelligence and organize finances for that.
But it seems to me that he has the best shot of creating the AI god with his current process.
Even if you did expect scaling to probably bring in huge profits, naively it’d still be wiser to pick a growth strategy that didn’t require your company to become literally the most profitable company in the history of all companies or go bankrupt.
I mean, that depends on your goals.
I’m uninformed about the specifics of this situation, but I think that taking all-or-nothing gambles like this is evidence that someone is playing for unprecedented personal power, rather than standard capitalist mega-wealth.
He could force a bailout of OpenAI (and others) on national security grounds.
In fact, OpenAI’s CFO has already floated the idea of a government “backstop” (bailout).
https://www.wsj.com/video/openai-cfo-would-support-federal-backstop-for-chip-investments/4F6C864C-7332-448B-A9B4-66C321E60FE7
I think that taking all-or-nothing gambles like this is evidence that someone is playing for unprecedented personal power, rather than standard capitalist mega-wealth.
The pattern you’re naming seems common amongst grandiose narcissists. We tend to only see the cases where many coin flips came up heads in a row.