This is a story about a trend in total spending and the financial constraints it runs into. Training systems built from cheaper chips would offer more compute (by using even more chips), not ask for less money. There still won’t be a $770bn training system in 2030, but a $140bn training system might hold more compute, concentrating even more of the scaling in 2022-2029 and leaving less for 2030-2050 (where the cost of chip manufacturing eventually dominates). Google probably already has this advantage with its TPUs, and AWS is getting there with Trainium.
(More carefully, the further 2,000x of scaling over 2030-2050 is also an oversimplification, since AI will be integrating into the economy, so even at the current level of AI capabilities the largest AI companies will be gradually getting wealthier. Also, once training system growth slows and training methodology becomes more settled, training runs will get longer than ~3.5 months, increasing training compute per model.)
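To make the arithmetic behind these two claims concrete, here is a minimal sketch in Python with purely hypothetical cost and utilization numbers (none of them come from the estimates above): training compute scales with budget divided by cost per FLOP/s and with run length, so cheaper chips at the same budget, or a longer run on the same system, both raise compute per model.

```python
# Back-of-the-envelope relation between budget, chip cost, run length, and training compute.
# All constants below are hypothetical placeholders, not estimates from this post.

def training_compute_flop(budget_usd: float,
                          usd_per_flop_per_s: float,
                          utilization: float,
                          run_seconds: float) -> float:
    """Total training compute = (budget / cost per FLOP/s) * utilization * run time."""
    flop_per_s = budget_usd / usd_per_flop_per_s
    return flop_per_s * utilization * run_seconds

MONTH_S = 30 * 24 * 3600  # seconds in a ~30-day month

# Same hypothetical $140bn budget, cheaper chips (lower $/FLOP/s) -> more compute, not less money.
baseline = training_compute_flop(140e9, 3e-11, 0.4, 3.5 * MONTH_S)
cheaper  = training_compute_flop(140e9, 1.5e-11, 0.4, 3.5 * MONTH_S)

# Same hypothetical system, longer run (~9 months instead of ~3.5) -> more compute per model.
longer = training_compute_flop(140e9, 3e-11, 0.4, 9 * MONTH_S)

print(f"baseline run:                {baseline:.2e} FLOP")
print(f"chips 2x cheaper, same $:    {cheaper:.2e} FLOP  ({cheaper / baseline:.1f}x)")
print(f"run stretched to ~9 months:  {longer:.2e} FLOP  ({longer / baseline:.1f}x)")
```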