The nuance was that their framework can't determine whether data scaling or compute scaling drove the majority of improvements, nor can it separate the two contributions. But the core finding, that gains in algorithmic efficiency are almost entirely dependent on compute scaling, still holds: if we had a fixed stock of compute from now on, we would see essentially zero further improvement in AI, forever.