[Linkpost] Growth in FLOPS used to train ML models

This is a linkpost for https://shape-of-code.com/2022/03/13/growth-in-flops-used-to-train-ml-models/

Given the long history of continually increasing compute power, what is the maximum compute power that might be available to train ML models in the coming years?
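One way to frame this question is as a simple exponential extrapolation. As a minimal sketch (not from the linked post), the projection below assumes a baseline compute budget and a fixed doubling time; both numbers are illustrative placeholders, not measured values:

```python
# Illustrative extrapolation: project available training compute forward
# under an assumed constant doubling time. The baseline (1e24 FLOPs) and
# the doubling time (1 year) are hypothetical inputs, chosen only to show
# the arithmetic.

def projected_flops(baseline_flops: float,
                    doubling_time_years: float,
                    years_ahead: float) -> float:
    """Return baseline_flops * 2 ** (years_ahead / doubling_time_years)."""
    return baseline_flops * 2 ** (years_ahead / doubling_time_years)

# Example: a 1e24-FLOP training run today, doubling every year, 5 years out.
print(f"{projected_flops(1e24, 1.0, 5.0):.2e}")  # 3.20e+25
```

The interesting empirical question, of course, is whether the doubling time itself holds steady, and what hardware, budget, or energy constraints eventually cap it.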