Conditional on a slowdown in AI progress, my primary hypothesis is that recent AI models haven't scaled much in compute compared to past models and have instead relied on progress in RL, and that RL is becoming less and less of a free lunch, to the point where it is actually less efficient than pre-training.
This is a slight update against software-only singularity stories.