Yep—this is also my current mental model for agent vs human performance.
Suppose that initially, frontier AIs are broadly superhuman when given very small time budgets, but subhuman when given large time budgets.
But why would we expect this?
Probably because LLMs train for vastly longer: they have several OOM more experience than a human, but almost entirely on short-horizon tasks (far easier to acquire datasets for) whose length is well below one context window (roughly comparable to the one-day, ~500k-token-equivalent window of the hippocampal wake cycle). This is a side effect of their current data-inefficient training/learning algorithms, which are themselves a consequence of their unique compute economics: copies cost nearly nothing, so training cost can be amortized across every deployed copy, and the first economically viable AI to match or surpass humans will therefore have been vastly more expensive to train than a human is.
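To make the amortization point concrete, here is a minimal back-of-the-envelope sketch; all numbers (training cost, per-copy cost, number of copies) are hypothetical, chosen only to show the shape of the argument, not taken from the comment above.

```python
# Illustrative amortization arithmetic (all numbers hypothetical).
train_cost = 1e9          # one-time cost to train the model, in dollars
copy_cost = 1e3           # marginal cost to deploy one additional copy
n_copies = 1_000_000      # copies deployed over the model's lifetime

# Per-copy cost: the huge one-time training bill is spread across every copy.
per_copy = train_cost / n_copies + copy_cost
print(f"effective cost per deployed copy: ${per_copy:,.0f}")  # -> $2,000

# A human's "training" can't be amortized this way: each new worker has to be
# educated from scratch, so the analogous per-unit cost stays at the full
# training cost.
```

Under these assumptions, even a training run several OOM more expensive than raising and educating one human is cheap per unit of deployed labor, which is why data-inefficient training remains economically viable.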