“OpenAI has told investors it expects to reach $125B in revenue by 2029 and $174B by 2030. OpenAI expects to serve 3B monthly active users by 2030, with 900M DAUs.” (from sacra.com; original source seems to be theinformation.com, but it is behind a paywall).
FAANG performance in 2024 (source: ChatGPT):
Apple — $391B net sales
Amazon — $638B net sales
Alphabet (Google) — $350B revenue
Meta Platforms — $164.5B revenue
Netflix — $39B revenue
Therefore, OpenAI expects to reach revenue comparable to Meta’s 2024 figure, or roughly half of Alphabet’s, within five years.
YC’s request for startups suggests that capable AI agents will replace entire industries: “Instead of selling to the dinosaurs, you could make them extinct.”
Points (3) and (4) contradict each other. I am surprised that the projection is so low if capable AI agents are going to replace entire industries.
My hypotheses:
AI progress is expected to slow down and capable AI agents are not coming in the next 5 years, so existing industries will not be replaced by AI agents. Basically, in (4) YC is shooting far into the future.
Agents are coming, but there will be multiple providers and OpenAI will capture only a small part of this market.
A variant of this: OpenAI captures only the API/infrastructure part, while the rest of the value is captured by smaller companies applying OpenAI’s API in their own niches.
Capable agents are coming soon and OpenAI will be the main provider, but OpenAI’s revenue is growing exponentially and it just takes time to overtake the rest of FAANG. However, based on ChatGPT research, it took the FAANG companies 5-10 years to reach their first year with $1B+ revenue, and for OpenAI it was 8-9 years, so nothing is substantially different here.
If I have understood correctly, you’re saying that OpenAI should be forecasting greater revenue than this, if they truly think they will have AIs capable of replacing entire industries. But maybe they’re just being cautious in their forecasts?
Suppose I have a 3D printing / nanotechnology company, and I think that a year from now I’ll have an unlimited supply of infinity boxes capable of making any material artefact. World manufacturing is worth over US$10 trillion. If I thought I could put it all out of business by charging just 10% of what current manufacturers charge, I could claim expected revenue of $1 trillion.
Such a prediction would certainly be attention-grabbing, but maybe it would be reckless to make it? Maybe my technology won’t be ready. Maybe my products will be blocked from most markets. Maybe someone will reverse-engineer and open-source the infinity boxes, and prices will crash to $0. Maybe I don’t want the competition or the government to grasp just how big my plans are. Maybe the investors I want wouldn’t believe such a scenario. There are a lot of reasons why a company that thinks it might be able to take over the economy or even the world would nonetheless not put that in its prospectus.
You understood my question correctly.
I like your hypothesis. I can add one more reason to behave this way: managing expectations. In your example, if you make a $1 trillion revenue prediction, it would catch attention and the market might incorporate that prediction early. But if the real revenue then comes in somewhat below the prediction for some reason, the expectation wouldn’t be met.
On the other hand, if you raise money, you likely want your valuation to be as high as possible.
There is likely to be a stage in agents replacing knowledge workers where a model can go away and reliably do a well-defined task that would take a human about a day without getting stuck (or at least note that it has got stuck and ask for help). At that point, agents will be able to do a lot of economically valuable work, but they will also need a lot of human task definition, managing, handholding, helping them get unstuck, and checking their output — and humans who have learnt to be good at doing all that will be at a premium. Shortly after that, the agents will be able to do tasks that would take a human a couple of days, and then a week, then a couple of weeks, and the ratio of human supervision work to AI work will decrease rapidly.
That is to say, there are intermediate stages between “AI is useless as anything other than a tool” and “AI can completely replace all knowledge workers”. Given how fast capabilities are growing (METR’s results on this suggest practicable task lengths double every 4-6 months), these intermediate stages will be shortish, but during them applying AI successfully will require a huge amount of change to how companies do business and training a lot of humans in how to hand-hold AI, something that is slow and expensive. So there is likely to be an overhang for a while, at least until we get all the way to the “drop-in-replacement knowledge worker” stage. So perhaps OpenAI is being cautious about adoption rates, because they understand that the adoption process will be problematic and uneven.
Also, AI capabilities tend to be surprisingly uneven by human standards. If ¾ of the work can be automated and ¼ can’t, progress rapidly becomes bottlenecked on whatever the AI can’t do. Ditto across industries: if some quickly reach full AI-automation and others don’t, the price of the outputs of the fully-automated ones nosedives to near the cost of the inputs, while conversely the price of whatever hasn’t been automated goes up, as it becomes a scarce resource in the growing economy.
I agree with this, but I expect these intermediate stages to be in the past by 2030. E.g. in “Measuring AI Ability to Complete Long Tasks” METR predicts (see the “A sensitivity analysis of the extrapolated date...” graph) that in 2027-2031 AI will be able to complete, with a 50% success rate, tasks that take a human 1 work-month. Even when AI can only do tasks equivalent to 1 day of work, there is a huge incentive to use it. If we assume the task horizon doubles every 7 months, the 1 work-day (8 work-hour) horizon should be reached in about 2 years.
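A back-of-the-envelope sketch of that extrapolation in Python (the ~1-hour current horizon and the 20-working-day month are my assumptions, not figures from this thread; ~1 hour is roughly the 50%-success horizon METR reported for the best models when the paper came out):

```python
from math import log2

# Assumptions (mine, not from the thread): current 50%-success task
# horizon of ~1 hour, constant doubling time of 7 months.
CURRENT_HORIZON_HOURS = 1.0
DOUBLING_MONTHS = 7.0

def months_until(target_hours: float) -> float:
    """Months until the horizon reaches target_hours under constant doubling."""
    return log2(target_hours / CURRENT_HORIZON_HOURS) * DOUBLING_MONTHS

WORK_DAY = 8          # 1 work-day = 8 work-hours
WORK_MONTH = 8 * 20   # assuming ~20 working days per month

print(f"1 work-day horizon:   ~{months_until(WORK_DAY):.0f} months")
print(f"1 work-month horizon: ~{months_until(WORK_MONTH):.0f} months")
```

Under these assumptions the 1 work-day horizon arrives in about 21 months and the 1 work-month horizon in about 51 months (roughly 4.3 years), consistent with both the ~2-year figure above and METR’s 2027-2031 window.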
Hypothesis:
OpenAI believe they won’t be able to lock in market share. That is, they expect competitors will take a large share of the industry.
What I consider a more likely hypothesis: they have to tell investors something that sounds sane, so they are giving them an in-distribution prediction based on the performance of past companies. OpenAI might not believe it, but investors will, and that’s all that matters for the purposes of why they are making these projections.