Kinda Contra Kaj on LLM Scaling
I didn’t see Kaj Sotala’s “Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI” until yesterday, or I would have replied sooner. I wrote a reply last night and today, which got long enough that I considered making it a post, but I feel like I’ve said enough top-level things on the topic until I have data to share (within about a month hopefully!).
But if anyone’s interested in seeing my current thinking on the topic, here it is.