I don’t buy this claim. Just think about what a time horizon of a thousand years means: this is a task that would take an immortal CS graduate a thousand years to accomplish, with full internet access and the only requirement being that they can’t be assisted by another person or an LLM. An AI that could accomplish this type of task with 80% accuracy would be a superintelligence. And an infinite time horizon, interpreted literally, would be a task that a human could only accomplish if given an infinite amount of time. I think that given a Graham’s number of years a human could accomplish a lot, so I don’t think the idea that time horizons should shoot to infinity is reasonable.
But importantly, the AI would get the same resources as the human! If a CS graduate would need 1000 years to accomplish the task, the AI would get proportionally more time. So the AI wouldn’t have to be a superintelligence any more than an immortal CS graduate is a superintelligence.
Similarly, given a Graham’s number of years a human could accomplish a lot. But given a Graham’s number of years, an AI could also accomplish a lot.
Overall, the point is just this: if you think that broadly superhuman AI is possible, then it should be possible to construct an AI that can match humans on tasks of any time horizon (as long as the AI gets commensurate time).
But you can’t use that same “let’s be patient” logic for interpreting time horizons and then turn around and make the improving problem-solving capability those time horizons represent the driver of the hypothesized superexponential growth over fixed-width calendar time steps.
Consider: the proposed model says that sometime in 2029, the 80% time horizon of cutting-edge AI models will increase by 100 orders of magnitude within a span of nanoseconds. How is an LLM supposed to make self-improvements on the order of googol-sized steps, which for all we know is itself a very long-horizon task, in less time than it takes an electron to cross the width of a CPU?
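To make the arithmetic behind that claim concrete, here is a minimal sketch of a toy superexponential model. The parameters (a six-month first doubling, each doubling taking 10% less calendar time than the last) are my own illustrative assumptions, not AI2027’s fitted values, but any model of this shape behaves the same way: the doubling times form a convergent geometric series, so the horizon diverges at a finite date and the late doublings get squeezed into vanishingly short windows.

```python
# Toy superexponential time-horizon model (illustrative parameters, not the
# AI2027 fit): each successive doubling of the 80% time horizon takes a fixed
# fraction less calendar time than the one before it.

SECONDS_PER_YEAR = 3.15e7

first_doubling_years = 0.5  # assumption: the first doubling takes six months
r = 0.9                     # assumption: each doubling takes 90% as long as the last


def years_until_n_doublings(n: int) -> float:
    """Calendar time to complete the first n doublings (geometric partial sum)."""
    return first_doubling_years * (1 - r**n) / (1 - r)


def nth_doubling_duration_years(n: int) -> float:
    """Calendar time that the nth doubling alone takes."""
    return first_doubling_years * r ** (n - 1)


# Because the durations form a convergent geometric series, all infinitely many
# doublings fit before a finite date: the horizon diverges in finite time.
singularity_years = first_doubling_years / (1 - r)  # 5.0 years under these parameters

# 100 orders of magnitude is roughly 333 doublings, since 10**100 < 2**333.
n = 333
print(f"all doublings finish within {singularity_years:.1f} years of the start")
print(f"doubling #{n} arrives after {years_until_n_doublings(n):.6f} years")
print(f"doubling #{n} itself takes ~{nth_doubling_duration_years(n) * SECONDS_PER_YEAR:.0e} seconds")
# Under these toy numbers the 333rd doubling, the one that completes the
# hundredth order of magnitude, lasts on the order of 1e-8 seconds (tens of
# nanoseconds), and every later doubling is shorter still.
```

Under those toy numbers the doubling that carries the horizon past its hundredth order of magnitude lasts tens of nanoseconds, and that is exactly the kind of step the model has to attribute to actual self-improvement work happening in that window.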
You’re totally right that grading AIs on long-time-horizon tasks is feasible and meaningful if you grant them the ability to work for a proportionate amount of time, but then those grades aren’t compatible with the AI2027 story as a metric that relates to development speed.