Ok, my basic viewpoint is that LLMs will plateau, but not before being able (combined with lots of human effort) to create LLMs++ (some unspecified improvement over existing LLMs), and that this process will repeat. I think you are right that some architectural / algorithmic changes will have to be made before we get to a fully capable AGI with anything like near-term available compute. What I don't see is why we should expect that the necessary algorithmic changes won't be stumbled upon by a highly engineered automation process trying literally millions of different ideas while iterating over all published open source code and academic papers. I describe my thoughts in more detail here: https://www.lesswrong.com/posts/zwAHF5tmFDTDD6ZoY/will-gpt-5-be-able-to-self-improve