No, it’s possible for LLMs to solve a subset of those problems without being AGI (quite conceivable, in fact: the history of AI research shows we often assume tasks are AI-complete when they are not, e.g. Hofstadter with chess, Turing with the Turing test).
I agree that the tests which are still standing are pretty close to AGI, but this is not a problem with Thane’s list; he is correctly avoiding the failure mode I just pointed out.
Unfortunately, this does mean that we may not be able to predict that AGI is imminent until the last moment. That is a consequence of the black-box nature of LLMs and our general confusion about intelligence.