Exactly. The future is hard to predict, and the author's strong confidence seems suspicious to me. Improvements have come fast in recent years:
2013-2014: word2vec and seq2seq
2017-2018: the Transformer and GPT-1
2022: chain-of-thought (CoT) prompting
2023: multimodal LLMs
2024: reasoning models
Are these linear improvements or revolutionary breakthroughs? Time will tell, but to me there is no sharp frontier between an increment and a breakthrough. AGI might result from such improvements, or it might not. We just don't know. But it's a fact that human general intelligence resulted from a long chain of tiny increments, and I also observe that results on the ARC-AGI benchmark exploded with CoT/reasoning models (not just math or coding benchmarks). So, while 2025 could be a relative plateau, I wouldn't be so sure that the following years will be too. To me, a confidence far from 50% is hard to justify.