Your very first point is, to be a little uncharitable, ‘maybe OpenAI’s whole product org is fake.’ I know you have a disclaimer here, but you’re talking about a product category that didn’t exist 30 months ago and that today has a single website reportedly used by 10% of people in the entire world, with widely reported expectations of ~$12B in revenue this year.
If your vibes are towards investing in that class of thing being fake or ‘mostly a hype machine’ then your vibes are simply not calibrated well in this domain.
No, the model here is entirely consistent with OpenAI putting out some actual cool products. Those products (under the model) just aren’t on a path to AGI, and OpenAI’s valuation is very much reliant on being on a path to AGI in the not-too-distant future. It’s the narrative about building AGI which is fake.
OpenAI’s valuation is very much reliant on being on a path to AGI in the not-too-distant future.
Really? I’m mostly ignorant on such matters, but I’d thought that their valuation seemed comically low compared to what I’d expect if their investors thought that OpenAI was likely to create anything close to a general superhuman AI system in the near future.[1] I considered this evidence that they think all the AGI/ASI talk is just marketing.
Well ok, if they actually thought OpenAI would create superintelligence as I think of it, their valuation would plummet because giving people money to kill you with is dumb. But there’s this space in between total obliviousness and alarm, occupied by a few actually earnest AI optimists. And, it seems to me, not occupied by the big OpenAI investors.
Consider, in support: Netflix has a $418B market cap. It is inconsistent to think that a $300B valuation for OpenAI or whatever’s in the news requires replacing tens of trillions of dollars of capital before the end of the decade.
Similarly, for people wanting to argue from the other direction, who might think a low current valuation is case-closed evidence against their success chances, consider that just a year ago the same argument would have discredited how they are valued today, and a year before that would have discredited where they were a year ago, and so forth. This holds similarly for historic busts in other companies. Investor sentiment is informational but clearly isn’t definitive, else stocks would never change rapidly.
Similarly, for people wanting to argue from the other direction, who might think a low current valuation is case-closed evidence against their success chances
To be clear: I think the investors would be wrong to think that AGI/ASI soon-ish isn’t pretty likely.
But most of your criticisms in the point you gave have ~no bearing on that? If you want to make a point about how effectively OpenAI’s research moves towards AGI, you should be saying things relevant to that, not airing general misgivings about their business model.
Or, I might understand ‘their business model is fake, which implies a lack of competence about them broadly,’ but then I go back to the whole ‘10% of people in the entire world’ and ‘expects ~$12B revenue’ thing.
The point of listing the problems with their business model is that they need the AGI narrative in order to fuel the investor cash, without which they will go broke at current spend rates. They have cool products, they could probably make a profit if they switched to optimizing for that (which would mean more expensive products and probably a lot of cuts), but not anywhere near the level of profits they’d need to justify the valuation.
That’s how I interpreted it originally; you were arguing their product org vibed fake, I was arguing your vibes were miscalibrated. I’m not sure what to say to this that I didn’t say originally.