Thanks for the correction and references. I just followed my “common sense” from lectures and other pieces.
What do you think made AlexNet stand out? Is it the depth and use of GPUs?
Thanks!
I’m working with a colleague on how the three components of compute systems (compute, memory, and interconnect) have trended over time, and on comparing those trends to our best estimates for the human brain (or other biological anchors). This will still take some time, but I hope we’ll be able to share it in the future (roughly by the end of the year).
Great post! I especially liked that you outlined potential emerging technologies and the economic considerations.
Having looked a bit into this when writing my TAI and Compute sequence, I agree with your main takeaways. In particular, I’d like to see more work on DRAM and the interconnect trends and potential emerging paradigms.
I’d be interested in your compute forecasts to inform TAI timelines. For example, Cotra’s draft report assumes a doubling time of 2.5 years for FLOPs/$ but acknowledges that this forecast could easily be improved by someone with more domain knowledge—that could be you.
Custom ML hardware (e.g., Google’s TPUs or Baidu’s Kunlun) is tricky to place in these sorts of comparisons. For those, I think the MLPerf benchmarks are super useful. I’d be curious to hear the authors’ expectations of how this research changes in the face of more custom ML hardware.
I’d be pretty excited to see more work on this. Jaime already shared our hardware sheet, where we collect information on GPUs, but as you outline, that’s peak performance and sometimes misleading.
Indeed, the MLPerf benchmarks are useful. I’ve already gathered their data in this sheet and would love to see someone play around with it. Besides MLPerf, Lambda Labs also shares some standardized benchmarks.
Co-author here.
I like your idea. Nonetheless, it’s pretty hard to make estimates of “total available compute capacity”. If you have any pointers, I’d love to see them.
Somewhat connected is the question: what fraction of this progress/trend is due to improvements in computational power versus increased spending? To get more insight into this, we’re currently looking into computing power trends, in particular the development of FLOPs/$ over time.
Thanks, appreciate the pointers!
Thanks for sharing your thoughts. As you already outlined, the report mentions at different occasions that the hardware forecasts are the least informed:
> Because they have not been the primary focus of my research, I consider these estimates unusually unstable, and expect that talking to a hardware expert could easily change my mind.
This is partially the reason why I started looking into this a couple of months ago, and I’m still doing so on the side. A couple of points come to mind:
I discuss the compute estimate side of the report a bit in my TAI and Compute series. The bottom line is that I agree with your caveats and list some of the same plots. However, I also go into some reasons why those plots might not be that informative for the metric we care about.
Many compute trend plots assume peak performance based on the spec sheet or a specific benchmark (LINPACK). This does not translate 1:1 to “AI computing capabilities” (let’s refer to them as effective FLOPs). See the discussion of utilization in our piece estimating training compute, and me ranting a bit about it in the appendix of my TAI and Compute series.
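As a toy illustration of that gap (every number here is an assumption for the example, not a measurement of any particular chip):

```python
# Toy illustration: spec-sheet peak FLOP/s vs. "effective" FLOP/s in training.
# Both numbers are assumed for illustration, not measured.

peak_flops = 312e12   # assumed advertised dense FP16 peak of an accelerator (FLOP/s)
utilization = 0.35    # assumed fraction of peak actually achieved during training

effective_flops = peak_flops * utilization
print(f"Effective: {effective_flops:.2e} FLOP/s "
      f"({utilization:.0%} of the {peak_flops:.2e} FLOP/s peak)")
```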
I think the same caveat applies to the TOP500. I’d be interested in a Graph500 trend over time (Graph500 is more about communication than pure processing capability).
Note that all of these reports and graphs usually refer to performance alone; eventually, we’re interested in FLOPs/$.
Anecdotally, EleutherAI explicitly said that the interconnect was their bottleneck for training GPT-NeoX-20B.
What do you think about hardware getting cheaper? I summarize Cotra’s point here.
I don’t have a strong view here, only a “yeah, seems plausible to me”.
Overall, either there will be room for improvement in chip design, or chip design will stabilize, which enables the above-outlined improvements from economies of scale (learning curves). Consequently, even if you believe that technological progress (more performance for the same price) might halt, compute costs should continue decreasing, as the same chips get cheaper to produce (the same performance for a decreased price); a toy sketch of such a learning curve follows below.
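As a minimal sketch of what such a learning curve looks like (the 20% learning rate and starting cost are assumed purely for illustration):

```python
import math

# Wright's law: unit cost falls by a fixed fraction (the "learning rate")
# with every doubling of cumulative production.

def unit_cost(cumulative_units: float,
              first_unit_cost: float = 100.0,         # assumed starting cost
              learning_rate: float = 0.20) -> float:  # assumed 20% drop per doubling
    b = -math.log2(1 - learning_rate)  # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

for q in (1, 2, 4, 8, 16):
    print(q, round(unit_cost(q), 2))  # 100.0, 80.0, 64.0, 51.2, 40.96
```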
Overall, I think you’re saying something like “this can’t go on, and the trend has already slowed down.” While I think you’re pointing towards important trends, I’m somewhat optimistic that other hardware trends might be able to continue driving progress in effective FLOPs; most recently, for example, the interconnect (networking multiple GPUs and creating clusters). I think a more rigorous analysis of the last 10 years could already give some insights into which components have been the drivers of more effective FLOPs.
For this reason, I’m pretty excited about the MLCommons benchmarks or something like Lambda Labs’ benchmarks—measuring the performance we actually care about for AI.
Lastly, I’m working on better compute cost estimates and hoping to have something out in the next couple of months.
We basically lumped together the reduced cost per FLOP and increased spending.
A report from CSET on AI and Compute projects the costs using two strongly simplified assumptions: (I) compute demand doubles every 3.4 months (based on OpenAI’s previous report) and (II) computing costs stay constant. This could give you a rough upper bound on projected costs.
Carey’s previous analysis uses this dataset from AI Impacts and therefore assumes:
> [..] while the cost per unit of computation is decreasing by an order of magnitude every 4-12 years (the long-run trend has improved costs by 10x every 4 years, whereas recent trends have improved costs by 10x every 12 years).
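To make these assumptions concrete, here’s a toy projection combining the cited growth rates (the $1M starting cost is a made-up placeholder, and letting the price improvement go to zero recovers CSET’s constant-cost upper bound):

```python
# Toy projection of training-run costs: compute demand doubles every
# 3.4 months (OpenAI's "AI and Compute" trend), while FLOP/$ improves
# 10x every `years_per_10x` years (4 = long-run trend, 12 = recent trend,
# per the AI Impacts numbers above). Starting cost is an assumed placeholder.

def projected_cost(years: float, start_cost: float = 1e6,
                   years_per_10x: float = 12.0) -> float:
    compute_growth = 2 ** (years * 12 / 3.4)        # demand doubling every 3.4 months
    price_decline = 10 ** (-years / years_per_10x)  # FLOP/$ improvement
    return start_cost * compute_growth * price_decline

for years in (1, 3, 5):
    print(years, f"${projected_cost(years):,.0f}")
```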
Thanks for the comment! That sounds like a good and fair analysis/explanation to me.
I’m wondering: could one just continue training Gopher (the previous bigger model) on the newly added data?
Minor correction. You’re saying:
> So training a 1-million parameter model on 10 books takes about as many FLOPS as training a 10-million parameter model on one book.
You link to FLOP per second, aka FLOPS, whereas you’re talking about the plural of FLOP, a quantity (often written FLOPs).
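For intuition, a minimal sketch using the common C ≈ 6·N·D training-compute approximation (the tokens-per-book figure is an assumed placeholder):

```python
# Rough rule of thumb: training compute C ≈ 6 * N * D FLOP, with
# N = parameter count and D = training tokens (about 6 FLOP per
# parameter per token for the forward + backward pass).

def training_flop(params: float, tokens: float) -> float:
    return 6 * params * tokens

tokens_per_book = 1e5  # assumed placeholder, for illustration only

small_on_ten_books = training_flop(1e6, 10 * tokens_per_book)  # 1M params, 10 books
large_on_one_book = training_flop(1e7, 1 * tokens_per_book)    # 10M params, 1 book

print(small_on_ten_books == large_on_one_book)  # True: same total FLOP
```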
To our knowledge, it’s now the most compute-intensive model ever trained.
Thanks for the feedback, Gunnar. You’re right—it’s more of a recap and introduction. I think the “newest” insight is probably the updates in Section 2.3.
I’d also be curious to know which aspects and questions you’re most interested in.