I can see an argument for why; tell me if this is what you're thinking:
The biggest reason the LLM paradigm might never reach AI takeoff is that LLMs can only complete short-horizon tasks and can't maintain coherence over longer time scales (e.g. when an LLM writes something long, it often starts contradicting itself). Intuitively, scaling up LLMs doesn't seem to have fixed this problem. However, this paper shows that LLMs have been getting better at longer-horizon tasks, so LLMs probably will scale to AGI.
Why do you think this narrows the distribution?
I can see an argument for why; tell me if this is what you're thinking: