I believe intelligence is pretty sophisticated, while others seem to think it’s mostly brute force. This tangent would, however, require a longer discussion of the proper interpretation of Sutton’s bitter lesson.
I’d be interested in seeing this point fleshed out, as it’s a personal crux of mine (and, I expect, of many others). The bullish argument I’m compelled by goes something along the lines of:
Bitter Lesson: SGD is a much more scalable optimizer than you, and we’re bringing it to pretty stupendous scales.
Lots of Free Energy in Research Engineering: My model of R&D in frontier AI is that it is often blocked by a lot of tedious and laborious engineering. It doesn’t take a stroke of genius to think of RL on CoT; it took (comparatively) quite a while to get it to work (there’s a toy sketch of that loop after this list).
Low Threshold in Iterating Engineering Paradigms: Take a technology, scale it, find its limits, pivot, repeat. There were many legitimate arguments floating around last year about the parallelism tradeoff and shortcut generalization which seemed to suggest limits of scaling pretraining. I take these to basically be correct; it just wasn’t that hard to pivot towards a nearby paradigm which didn’t face similar limits. I expect similar arguments to crop up around the limits of model-free RL, or OOD generalization of training on verifiable domains, or training on lossy representations of the real world (language), or inference on fixed-weight recurrence, or… I expect (many) of them to basically be correct, I just don’t expect the pivot towards a scalable solution to these to be that hard. Or in other words, I expect much of the effort required to unlock these new engineering paradigms to be made up of engineering hours which we expect to be largely automated.
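To make the “RL on CoT” point concrete: the conceptual core of the recipe is just “sample a reasoning trace, check it against a verifier, reinforce what passes.” The sketch below is a deliberately tiny REINFORCE loop over a made-up four-option “trace” space with a trivial verifier; the tabular softmax policy, the `verify` function, and every constant are my own hypothetical stand-ins, not anyone’s actual training setup. The point is only that the idea fits in thirty lines; everything the sketch omits (infrastructure, data, stability at scale) is where the engineering hours go.

```python
# Toy sketch (assumption-laden): sample a "reasoning trace", score it with a
# verifier, and reinforce traces that pass. A tabular softmax policy stands in
# for an LLM's CoT policy; a hard-coded check stands in for a real grader.
import numpy as np

rng = np.random.default_rng(0)

N_STRATEGIES = 4   # stand-in for the space of possible reasoning traces
CORRECT = 2        # the (toy) verifier accepts only this strategy
logits = np.zeros(N_STRATEGIES)  # tabular softmax policy over traces

def sample_trace(logits):
    # Sample a trace from the softmax policy; return it with its distribution.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(N_STRATEGIES, p=p), p

def verify(trace):
    # Stand-in for a verifiable-domain grader (unit tests, proof checker, ...).
    return 1.0 if trace == CORRECT else 0.0

lr, baseline = 0.5, 0.0
for step in range(200):
    trace, p = sample_trace(logits)
    reward = verify(trace)
    advantage = reward - baseline           # reward relative to a running mean
    baseline = 0.9 * baseline + 0.1 * reward
    # REINFORCE: grad of log pi(trace) wrt logits is one_hot(trace) - p
    grad_logp = -p
    grad_logp[trace] += 1.0
    logits += lr * advantage * grad_logp

print("final policy:", np.round(np.exp(logits) / np.exp(logits).sum(), 3))
```

Running it, the policy concentrates on the verified strategy within a couple hundred samples; the gap between this toy and a frontier training run is almost entirely engineering, which is exactly the free-energy claim above.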