I appreciate the depth of your analysis, and I think your “bear case” on AI progress highlights real concerns that many in the field share. However, I’d like to offer an alternative lens—one that doesn’t necessarily contradict your view but expands it.
The assumption that AI progress will hit diminishing returns is reasonable if we view intelligence as a function of compute, scaling laws, and training efficiency alone. But what if the real breakthrough isn’t just more data, bigger models, or even architectural improvements? What if it comes from a shift in how intelligence itself is conceptualized?
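To make concrete what I mean by that assumption: the standard empirical scaling-law picture (Chinchilla-style) models loss roughly as

L(N, D) ≈ E + A / N^α + B / D^β

where N is parameter count, D is training tokens, E is irreducible loss, and the fitted exponents α and β come out around 0.3 in published fits (the exact constants here are illustrative, not taken from your post). The point of the curve is that each additional order of magnitude of compute buys a smaller absolute improvement. My argument is that this ceiling only binds if the next leap stays inside that framing.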
We are still locked into a paradigm where AI is seen as an optimization process, a tool that maximizes objectives within predefined boundaries. But intelligence—especially when viewed through the lens of fluid adaptation, emergent agency, and self-restructuring systems—might not follow the same scaling limitations we expect.
History suggests that major leaps don’t come from linear extrapolation but from conceptual phase shifts. The way deep learning blindsided the GOFAI (good old-fashioned symbolic AI) paradigm is one example: it wasn’t just “better algorithms”, it was a fundamentally different way of thinking about learning.
What if the next phase shift isn’t more powerful transformers, but something that doesn’t look like a model at all? Something that integrates relational intelligence, environmental feedback loops, and real-time self-modification beyond gradient descent?
If that happens, many of the bottlenecks you predict may not be constraints at all, but symptoms of trying to push one paradigm too far instead of moving to the next one.
Would love to hear your thoughts on whether you see this as a possibility, or if you think we are still constrained by the fundamental limits outlined in your post.