Does any takeoff scenario project the compute that would be available for LLM-led experiments?
I assume you can model research output per human AI researcher today as: number of experiments × FLOPs per experiment × research taste.
Over time, there is a lower bound on how many FLOPs you need for a meaningful experiment.
My question is how realistic the projected speed-up from having LLMs do AI research is, to the extent that "doing research" means the LLMs conduct the experiments themselves. You'd probably need to 100-1000x the compute available for small-scale experiments before you'd see any meaningful acceleration.
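To make the model above concrete, here is a toy sketch. It reads the product as: experiments are bounded by total compute divided by a per-experiment FLOP floor, and taste scales the value of each experiment. Every number (budgets, the FLOP floor, the taste values) is made up purely for illustration; only the structure of the argument is from the post.

```python
def research_output(total_flops: float, flops_per_experiment: float, taste: float) -> float:
    """Output = (number of experiments) x (research taste),
    where the number of experiments is capped by total compute
    divided by the per-experiment FLOP floor."""
    experiments = total_flops / flops_per_experiment
    return experiments * taste

# Hypothetical baseline: a human researcher with a fixed compute budget.
baseline = research_output(total_flops=1e21, flops_per_experiment=1e19, taste=1.0)

# LLM-led research at the same budget: the same experiment count,
# but (assumed here) lower taste per experiment.
llm_same_budget = research_output(total_flops=1e21, flops_per_experiment=1e19, taste=0.3)

# The post's premise: scale compute 100x and LLM-led experimentation
# can outpace the human baseline despite worse taste.
llm_scaled = research_output(total_flops=1e23, flops_per_experiment=1e19, taste=0.3)

print(f"{baseline:.0f} {llm_same_budget:.0f} {llm_scaled:.0f}")
```

Under these made-up numbers, the LLM only wins after the 100x compute increase, which is the shape of the question being asked.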
Thoughts?
Inspired by this tweet: https://x.com/chrispainteryup/status/2020738025907712225?s=46