I talk about this in the Granular Analysis subsection, but I’ll elaborate a bit here.
I think that hundreds of thousands of cheap labor hours for curation is a reasonable guess, but this likely comes to under a million dollars in total, which is less than 1% of the total cost.
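To make the claim concrete, here's a back-of-envelope check; every number in it is an illustrative assumption (the hours, the wage, and the total training cost), not a sourced figure:

```python
# Sanity check on the curation-labor estimate (all numbers are
# illustrative assumptions, not sourced figures).
labor_hours = 300_000        # "hundreds of thousands" of hours
hourly_wage = 2.50           # assumed cheap-labor wage, USD/hour
total_training_cost = 100e6  # assumed total GPT-4 training cost, USD

curation_cost = labor_hours * hourly_wage
share = curation_cost / total_training_cost
print(f"curation ≈ ${curation_cost:,.0f} ({share:.1%} of total)")
# → curation ≈ $750,000 (0.8% of total)
```

Even doubling the hours or the wage keeps curation labor in the low single digits of percent of the total.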
I have not seen any substantial evidence of OpenAI paying for licenses before the training of GPT-4, much less the sort of expenditures that would move the needle on the total cost.
After the training of GPT-4 we do see things like a deal between OpenAI and the Associated Press (also see this article on that, which mentions a first-mover clause), with costs looking to be in the millions. That's more than 1% of the cost of GPT-4, but notably it seems this came after GPT-4. I expect GPT-5, which this sort of deal might be relevant for, to cost substantially more. It's possible I'm wrong about the timing and substantial deals of this sort were in fact made before GPT-4, but I have not seen substantive evidence of this.
I think using the term "training run" in that first bullet point is misleading, and "renting the compute" is confusing since you can't actually rent the compute just by having $60M; you likely need a multi-year contract.
I can’t tell if you’re attributing the hot takes to me? I do not endorse them.
This is because I’m specifically talking about 2022, and ChatGPT was only released at the very end of 2022, and GPT-4 wasn’t released until 2023.
Good catch, I think the 30x came from including the advantage given by tensor cores at all and not just lower precision data types.
This is probably the decision I am least confident in. Figuring out how to do accounting on this issue is challenging, and depends a lot on what one is going to use the "cost" of a training run to reason about. Some questions I had in mind when thinking about cost:
If a lone actor wants to train a frontier model, without loans or financial assistance from others, how much capital might they need?
How much money should I expect to have been spent by an AI lab that trains a new frontier model, especially a frontier model that is a significant advancement over all prior models (like GPT-4 was)?
What is the largest frontier model it is feasible for any entity to create?
When a company trains a frontier model, how much are they “betting” on the future profitability of AI?
The simple initial way I use to compute cost, then, is to investigate empirical evidence of companies' expenditures and investment.
Now, these numbers aren’t the same ones a company might care about—they represent expenses without accounting for likely revenue. The argument I find most tempting is that one should look at depreciation cost instead of capital expenditure, effectively subtracting the expected resale value of the hardware from the initial expenditure to purchase the hardware. I have two main reasons for not using this:
Computing depreciation cost is really hard, especially in this rapidly changing environment.
The resale value of an ML GPU is likely closely tied to the profitability of training a model—if it turns out that using frontier models for inference isn’t very profitable, then I’d expect the value of ML GPUs to decrease. Conversely, if inference is very profitable, then the resale value would increase. A100s, for example, seem to have had their price substantially impacted by increased interest in AI; it’s not implausible to me that the resale value of an A100 is actually higher than OpenAI's initial cost.
Having said all of this, I’m still not confident I made the right call here.
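To illustrate why I'm skeptical of the depreciation-based accounting, here's a minimal sketch with hypothetical numbers (both prices are assumptions chosen only to show the failure mode):

```python
# Depreciation-based accounting sketch (hypothetical per-GPU numbers).
purchase_price = 10_000  # assumed per-GPU purchase price, USD
resale_value = 11_000    # assumed resale value after training, USD

depreciation_cost = purchase_price - resale_value
# If AI demand pushes resale above the original price, the hardware's
# "depreciation cost" goes negative and the method implies a
# near-free (or profitable!) training run, which seems like the
# wrong conclusion for the questions I listed above.
print(depreciation_cost)  # → -1000
```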
Also, I am relatively confident GPT-4 was trained only with A100s, and did not use any V100s as the Colab notebook you linked speculates. I expect that GPT-3, GPT-4, and GPT-5 will all have been trained with different generations of GPUs.
So, it’s true that NVIDIA probably has very high markup on their ML GPUs. I discuss this a bit in the NVIDIA’s Monopoly section, but I’ll add a bit more detail here.
Google’s TPU v4 seems to be competitive with the A100, and has similar cost per hour.
I think the current prices do in fact reflect demand.
My best guess is that the software licensing would not be a significant barrier for someone spending hundreds of millions of dollars on a training run.
Even when accounting for markup, a quick rough estimate still implies a fairly significant gap vs. gaming GPUs that FLOP/$ doesn't account for, though it does shrink that gap considerably.
All this aside, my basic take is that I think “what people are actually paying” is the most straightforward and least speculative means we have of defining near term “cost”.
75-80% for H100 and … 40-50% for gaming would be my guess?
Being generous, I get 0.2 × 24,000 / (1,599 × 0.6) ≈ 5, implying the H100 costs more than 5x as much to manufacture as the RTX 4090, despite having closer to 3x the FLOP/s.
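Spelling that arithmetic out (the margin fractions are my guesses from above, and the $24,000 H100 price is an assumed street price):

```python
# Margin-adjusted manufacturing cost comparison (prices and margins
# are rough guesses from the discussion above, not sourced figures).
h100_price = 24_000         # assumed H100 street price, USD
h100_cost_frac = 0.20       # assumes ~80% margin on the H100
rtx4090_price = 1_599       # RTX 4090 MSRP, USD
rtx4090_cost_frac = 0.60    # assumes ~40% margin on the RTX 4090

h100_mfg = h100_price * h100_cost_frac          # ≈ $4,800
rtx4090_mfg = rtx4090_price * rtx4090_cost_frac # ≈ $959
ratio = h100_mfg / rtx4090_mfg
print(f"manufacturing cost ratio ≈ {ratio:.1f}x")  # → ≈ 5.0x
```

So even after stripping out the markup difference, the implied manufacturing gap (~5x) exceeds the ~3x FLOP/s gap.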
I think communicating clearly with the word “woman” is entirely possible for many given audiences. In many communities, there exists an internal consensus as to what region of the conceptual map the word “woman” refers to. The variance of language between communities isn’t confined to the word “woman”—in much of the world the word “football” means what Americans mean by “soccer”. Where I grew up, I understood the tristate area to be NY, PA, and NJ—however, the term “the tristate area” is understood by other groups to mean one of … a large number of options.
(Related point: I’m not at all convinced that differing definitions of words is a problem that needs a permanent solution. It seems entirely plausible to me that this allows for beneficial evolution of language as many options spawn and compete with each other.)
Manifold.markets is play-money only, no real money required. And users can settle the markets they make themselves, so if you make the market you don’t have to worry about loopholes (though you should communicate as clearly as possible so people aren’t confused about your decisions).
I’m specifically interested in finding something you’d be willing to bet on—I can’t find an existing Manifold market; would you want to create one that you can decide? I’d be fine trusting your judgment.
I’m a bit confused where you’re getting your impression of the average person / American, but I’d be happy to bet on LLMs at least as capable as GPT-3.5 being used (directly or indirectly) on at least a monthly basis by the majority of Americans within the next year?
I think the null hypothesis here is that nothing particularly deep is going on, and this is essentially GPT producing basically random garbage since it wasn’t trained on the petertodd token. I’m wary of trying to extract too much meaning from these tarot cards.
I think point (2) of this argument either means something weaker than it needs to for the rest of the argument to go through, or is just straightforwardly wrong.
If OpenAI released a weakly general (but non-singularity-inducing) GPT-5 tomorrow, it would pretty quickly have significant effects on people’s everyday lives. Programmers would vaguely describe a new feature and the AI would implement it, AIs would polish any writing I do, and I would stop using Google to research things and instead just chat with the AI and have it explain such-and-such paper I need for my work. In their spare time, people would read custom books (or watch custom movies) tailored to their extremely niche interests. This would have a significant impact on the everyday lives of people within a month.
It seems conceivable that somehow the “socio-economic benefits” wouldn’t be as significant that quickly—I don’t really know what “socio-economic benefits” are, exactly.
However, the rest of your post seems to treat point (2) as proving that there would be no upside from a more powerful AI being released sooner. This feels like a case of a fancy, clever theory obscuring an obvious reality: better AI would impact a lot of people very quickly.
Relevance of prior theoretical ML work to alignment; research on obfuscation in theoretical cryptography as it relates to interpretability; theory underlying various phenomena such as grokking. Disclaimer: this list is very partial and just thrown together.
Hm, yeah that seems like a relevant and important distinction.
I think I was envisioning profoundness as humans can observe it to be primarily an aesthetic property, so I’m not sure I buy the concept of “actually” profoundness, though I don’t have a confident opinion about this.
I think on the margin new alignment researchers should be more likely to work on ideas that seem less deep than they currently seem to me to be.
Working on a wide variety of deep ideas does sound better to me than working on a narrow set of them.
If something seems deep, it touches on stuff that’s important and general, which we would expect to be important for alignment.
The specific scenario I talk about in the paragraph you’re responding to is one where everything except for the sense of deepness is the same for both ideas, such that someone who doesn’t have a sense of which ideas are deep or profound would find the ideas basically equivalent. In such a scenario, my argument is that we should expect the deep idea to receive more attention, despite there not existing legible or well-grounded reasons for this. Some amount of preference for the deep idea might be justifiable on the grounds of trusting intuitive insight, but I don’t think the track record of intuitive insight as to which ideas are good is actually very impressive—there are a huge number of ideas that sounded deep but didn’t work out (see some philosophy, psychoanalysis, etc.) and very few that did work out.
try to recover from the sense of deepness some pointers at what seemed deep in the research projects
I think on the margin new theoretical alignment researchers should do less of this, as I think most deep-sounding ideas just genuinely aren’t very productive to research and aren’t amenable to being proven unproductive to work on—often the only evidence that a deep idea isn’t productive to work on is that nothing concrete has come of it yet.
I don’t have empirical analysis showing this—I would probably gesture to various prior alignment research projects to support this if I had to, though I worry that would devolve into arguing about what ‘success’ meant.