It seems to me that even a hardcore skeptic of AI 2027 would have been unlikely to predict a much larger error.
As someone who could perhaps be termed as such, my expectations regarding the technical side of things only start to significantly diverge at the start of 2027. (I’m not certain of Agent-1 1.5x’ing AI research speed, but I can see that.[1] The rest seems more or less priced-in.) And indeed, the end of 2026 is the point where, the forecast itself admits, its uncertainty increases and its predictions get less grounded.
Specifically, the point where I get off the ride is this one:
OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time.
My understanding is that Agent-2 essentially “closes the loop” on automated AI R&D, and while human input is still useful due to worse taste, it’s no longer required. That’s the part that seems like a “jump” to me, not a common-sensical extrapolation, and which I mostly expect not to happen.
Because I am really confused about how much AI is accelerating research/programming now, I have no idea what number to extrapolate. Maybe it gets so good at fooling people into thinking they’re being incredibly productive by managing 50 agents at once that it slows research down by 50% instead?
Out of my own curiosity: if the real world plays out as you anticipate and Agent-2 does not close the loop, how much does that push back your timelines? Do you think something like Agent-3 or Agent-4 could close the loop, or do you think it is further off than even that?