How the AI Labs Make Profit (Maybe, Eventually)

I wrote this essay as a submission to Dwarkesh Patel’s blog prize, though I have been meaning to write this up for a while.

Usually, for a company to become profitable, it needs to increase revenue, decrease costs, or some mixture of the two. For AI companies in their current form, I think there is a third path to profitability that looks like increasing revenue but is distinct from what they are currently doing: internal deployment, where they spin up companies inside the lab.

First, the AI companies currently aren’t facing much pressure to become profitable. That is partly why OpenAI and Anthropic are the first companies to reach ~$900B valuations while remaining cash flow negative. They have had the luxury of forgoing profitability and focusing on growth because the market has been willing to fund that growth. This allows ideologies to persist within the companies that eventually might not continue to fly, like “we are going post-economic, money won’t matter” or “we will build the machine god and ask it to make money”. But eventually, these companies will be forced to become profitable. There is only about one more round of capital in which they can remain unprofitable: perhaps OpenAI/​Anthropic could raise $250-500B at a $1.5-2.5T valuation, but it seems very unlikely that they could raise $1T+ at a $4T+ valuation.

It’s fairly hard to imagine AI labs cutting costs enough to become profitable. They could prioritize developing and releasing smaller models, but it seems difficult to stay in the race without pushing the frontier. They could try to cut research costs, but these are likely to increase as demand for larger and more intelligent models grows. Given the companies’ ambitions and investors’ expectations, cost-cutting does not seem like the method they will choose.

It is more plausible that the labs could increase revenues by charging more. Many individual users already pay $2000/​year/​company, and some enterprises are likely paying $100M+/​year; some users would be willing to spend 10-100x that. But price discrimination among these users will be hard to implement, and switching costs are low. It is conceivable that one company could get ahead of the others and charge a premium for its intelligence, even if only in certain domains, but while there are theoretical arguments for this, it hasn’t happened yet, especially not for any extended period.

The main obstacle to AI companies significantly increasing revenues is that open source competitors can distill models and catch up to the frontier within 6-12 months. Moreover, competitors like Cursor serve frontier models while collecting data on which patches users prefer, and can train their own models on that data, further disadvantaging the frontier companies. I have done some rough modelling, beyond the scope of this post, and I think it is unlikely that companies will be able to monetize their models within this short window, especially as training costs keep increasing. It has also been suggested that companies might stop charging per token and instead charge for intelligence. But it is hard to know how much the tokens are worth, and this amounts to charging more for better models, which often won’t be worth it, since firms will prefer to pay much less for slightly less intelligence.
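To make the shape of that rough modelling concrete, here is a toy version of the back-of-envelope calculation: a frontier model must recoup its training cost during the window before open-source distillation erodes its pricing power. Every parameter below is a hypothetical assumption of mine for illustration, not a figure from the essay.

```python
# Toy model of the monetization window: a frontier model must recoup its
# training cost before open-source catch-up commoditizes it.
# All parameters are hypothetical assumptions, chosen only for illustration.

def window_profit(training_cost: float,
                  monthly_revenue: float,
                  monthly_serving_cost: float,
                  window_months: int) -> float:
    """Net profit earned before the model's pricing power is eroded."""
    margin = monthly_revenue - monthly_serving_cost
    return margin * window_months - training_cost

# Hypothetical frontier model: $3B to train, a 9-month window before
# open-source catch-up, $250M/month revenue against $100M/month serving cost.
profit = window_profit(
    training_cost=3e9,
    monthly_revenue=250e6,
    monthly_serving_cost=100e6,
    window_months=9,
)
print(profit)  # negative: the window is too short to recoup training cost
```

Under these made-up numbers the model loses money, and the gap widens as training costs rise faster than the catch-up window lengthens, which is the dynamic the paragraph above describes.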

This leads to the final possibility: AI companies keeping their models in-house and using them themselves to make a profit. This might take the form of partnerships with other firms, or of the labs building companies within the company.

There are many industries with very large revenues that could benefit immensely from LLMs. I’ll briefly talk about quantitative trading, but the pharmaceutical industry and others could conceivably make great use of LLMs and other AI models.

Trading firms make a lot of money: some firms earn as much as $50B in net trading revenue per year, and the industry as a whole earns ~$200B. Many employees at AI firms come from trading firms, so there is a very natural fit. Certain trading strategies that already benefit heavily from LLMs, like sentiment analysis on presswires or analysis of earnings reports, could come to be dominated not by traditional trading firms but by the trading desks inside AI companies.

It’s worth considering just how much more valuable this could be to the companies than releasing their models to the public. In trading and in other domains, the total value of an alpha/​edge is inversely proportional to the number of firms that have it. This is more radical than it first appears: it is not merely that if the number of entities holding a certain edge increases from one to two, each gets some fraction of the original edge; rather, the total amount the edge is worth goes down, and that smaller total is then split between the entities. For AI companies, this means not only that the intelligence might be worth more in total if kept internal, but also that they don’t need to share any of the value with the company that would otherwise be using it through the API.
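The edge-dilution arithmetic can be sketched as follows. The 1/n functional form mirrors the inverse-proportionality claim above; the dollar figure is purely illustrative and not from the essay.

```python
# Sketch of the edge-dilution argument: the total value of an edge is
# assumed inversely proportional to the number of firms holding it, and
# that shrunken total is then split among them. The dollar amount is made up.

def per_firm_edge_value(solo_value: float, n_firms: int) -> float:
    """Value each firm captures when n_firms share the same edge."""
    total_value = solo_value / n_firms   # the edge itself is worth less...
    return total_value / n_firms         # ...and the remainder is split n ways

solo = 1_000_000_000  # hypothetical value of an edge held by one firm ($1B)
for n in (1, 2, 4):
    print(n, per_firm_edge_value(solo, n))
# Going from 1 firm to 2 cuts each firm's take to a quarter, not a half.
```

This quadratic falloff is why keeping the edge internal, rather than selling access to it via an API, can dominate even before accounting for the revenue share an API customer would capture.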

There is already some evidence that this is happening. AI companies have internal models that they use to develop the next generations, and they are keeping them internal for longer before release, beyond just safety testing. There are rumours that SSI is trading internally, labs are already working with trading firms, and Anthropic acquired Coefficient Bio, a company that could plausibly help them do AI-led drug discovery.

Altogether, I think it is most likely that companies begin to make revenue from internal deployment, and there are many incentives pushing them in this direction. This has significant implications, particularly for those concerned about potential risks from AI systems: namely, that a lot of the focus should be on internal deployment.

Credit: Ideas are my own, but two examples came from conversations with Ege Erdil.