Given the current AI paradigm, intelligence and horizon are strongly correlated, and it’s not immediately obvious to me how you’d break that correlation. But I don’t know that they need to be correlated in principle.
Case in point: LLMs are already smarter than most humans on 15-second time scales, but they are considerably worse than 100-IQ humans at long-horizon tasks.
How? This doesn’t feel possible.