The way I’d think about this is:
Currently, intellectual labor from machine learning researchers is expensive relative to compute: a $1M/year ML researcher costs about as much as 30 or so H100s. At the point where you have AGI, you can probably run the equivalent of one ML researcher on substantially less hardware than that. (I'm amortizing here; presumably you'd be running your models across multiple chips, serving many inference requests simultaneously.) This means that some ways of converting intellectual labor into compute efficiency will become cost-effective that weren't previously. So I expect ML to become substantially more labor-intensive, with much more finicky special-casing.
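As a rough sanity check on the arithmetic, here's a minimal back-of-the-envelope sketch. All the specific figures (the fully loaded researcher cost, the H100 rental rate, the post-AGI hardware requirement) are my own illustrative assumptions, not claims from the argument itself:

```python
# Back-of-the-envelope check of the labor-vs-compute comparison above.
# All figures are illustrative assumptions, not established numbers.

RESEARCHER_COST_PER_YEAR = 1_000_000   # assumed fully loaded cost, $/year
H100_RENTAL_PER_HOUR = 3.50            # assumed cloud rental rate, $/GPU-hour
HOURS_PER_YEAR = 24 * 365

h100_cost_per_year = H100_RENTAL_PER_HOUR * HOURS_PER_YEAR
gpus_per_researcher = RESEARCHER_COST_PER_YEAR / h100_cost_per_year

# Assumption: post-AGI, one researcher-equivalent runs on ~1 H100 of
# amortized inference hardware. The argument only says "substantially
# less" than ~30; the exact figure is a free parameter here.
AGI_H100S_PER_RESEARCHER_EQUIV = 1.0
labor_cheapening = gpus_per_researcher / AGI_H100S_PER_RESEARCHER_EQUIV

print(f"One H100 rents for about ${h100_cost_per_year:,.0f}/year")
print(f"A $1M/year researcher costs about {gpus_per_researcher:.0f} H100s")
print(f"Labor-to-compute price ratio falls ~{labor_cheapening:.0f}x "
      f"under this assumption")
```

Under these assumptions the ratio moves by roughly 30x: trades that spend a researcher-hour-equivalent to save even a few GPU-hours flip from wasteful to worthwhile, which is the mechanism behind the prediction of more labor-intensive, special-cased ML.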