Software engineer and repeat startup founder; best known for Writely (aka Google Docs). Now starting https://www.aisoup.org to foster constructive expert conversations about open questions in AI and AI policy, and posting at https://amistrongeryet.substack.com and https://x.com/snewmanpv.
snewman
quantity of useful environments that AI companies have
Meaning, the number of distinct types of environments they’ve built (e.g. one to train on coding tasks, one on math tasks, etc.)? Or the number of instances of those environments they can run (e.g. how much coding data they can generate)?
GPT-4.5 is going to be quickly deprecated
It’s still a data point saying that OpenAI chose to do a large training run, though, right? Even if they’re currently not planning to make sustained use of the resulting model in deployment. (Also, my shaky understanding is that expectations are for a GPT-5 to be released in the coming months and that it may be a distilled + post-trained derivative of GPT-4.5, meaning GPT-5 would be downstream of a large-compute-budget training process?)
Updates from Comments on “AI 2027 is a Bet Against Amdahl’s Law”
Oops, I forgot to account for the gap from a 50% success rate to an 80% success rate (and actually I’d argue that the target success rate should be higher than 80%).
Also potential factors for “task messiness” and the 5-18x context penalty, though as you’ve pointed out elsewhere, the latter should arguably be discounted.
Agreed that we should expect the performance difference between high- and low-context human engineers to diminish as task sizes increase. Also agreed that the right way to account for that might be to simply discount the 5-18x multiplier when projecting forwards, but I’m not entirely sure. I did think about this before writing the post, and I kept coming back to the view that when we measure Claude 3.7 as having a 50% success rate at 50-minute tasks, or o3 at 1.5-hour tasks, we should substantially discount those timings. On reflection, I suppose the counterargument is that this makes the measured doubling times look more impressive, because (plausibly) if we look at a pair of tasks that take low-context people 10 and 20 minutes respectively, the time ratio for realistically high-context people might be more than 2x. But I could imagine this playing out in other ways as well (e.g. maybe we aren’t yet looking at task sizes where people have time to absorb a significant amount of context, and so as the models climb from 1 to 4 to 16 to 64 minute tasks, the humans they’re being compared against aren’t yet benefiting from context-learning effects).
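One toy way to see that counterargument (entirely hypothetical numbers, not from the METR data): if the low-context penalty behaves like a roughly fixed context-acquisition overhead that low-context contractors pay on every task, then the ratio measured between low-context completion times understates the ratio a high-context engineer would see.

```python
# Toy illustration with made-up numbers: treat the context penalty as a fixed
# per-task overhead that only low-context contractors incur.
context_overhead = 8                      # minutes of context-gathering (hypothetical)
low_context_times = (10, 20)              # measured low-context task times (hypothetical)
high_context_times = tuple(t - context_overhead for t in low_context_times)

print(low_context_times[1] / low_context_times[0])    # 2.0 -- measured ratio
print(high_context_times[1] / high_context_times[0])  # 6.0 -- high-context ratio
```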
One always wishes for more data – in this case, more measurements of human task completion times with high and low context, on more problem types and a wider range of time horizons...
Surely if AIs were completing 1-month-long self-contained software engineering tasks (e.g. what a smart intern might do in the first month) that would be a big update toward the plausibility of AGI within a few years!
Agreed. But that means time from today to AGI is the sum of:
1. Time for task horizons to increase from 1.5 hours (the preliminary o3 result) to 1 month
2. Plausibly “a few years” to progress from 1-month-coder to AGI.
If we take the midpoint of Thomas Kwa’s “3-4 months” guess for subsequent doubling time, we get 23.8 months for (1). If we take “a few years” to be 2 years, we’re in 2029, which is farther out than “the most aggressive forecasts” (e.g. various statements by Dario Amodei, or the left side of the probability distribution in AI 2027).
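For transparency, here’s the arithmetic behind that 23.8-month figure, under my assumptions that a “1-month” task is roughly 167 working hours and that a doubling takes 3.5 months (the midpoint of the 3-4 month guess):

```python
import math

start_horizon_hours = 1.5      # preliminary o3 result
target_horizon_hours = 167     # ~1 working month (assumption)
months_per_doubling = 3.5      # midpoint of the "3-4 months" guess

doublings = math.log2(target_horizon_hours / start_horizon_hours)   # ~6.8
months_for_step_1 = doublings * months_per_doubling                 # ~23.8
print(f"{doublings:.1f} doublings -> {months_for_step_1:.1f} months")
```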
And given the starting assumptions, those are fairly aggressive numbers. Thomas’ guess that “capability on more realistic tasks will follow the long-term 7-month doubling time” would push this out another two years, and one could propose longer timelines from one-month-coder to AGI.
Of course this is not proof of anything – for instance, task horizon doubling times could continue to accelerate, as envisioned in AI 2027 (IIRC), and one could also propose shorter timelines from one-month-coder to AGI. But I think the original statement is fair: even if we use 3-4 months as the doubling time, isn’t this an update away from “the most aggressive forecasts”?
(When I wrote this, I was primarily thinking about Dario projecting imminent geniuses-in-a-data-center, and similar claims that AGI is coming within the next couple of years or even is already here.)
To be clear, I agree that the bad interpretations were not coming from METR.
Interpreting the METR Time Horizons Post
Thanks, I’ve edited the post to note this.
Sure – I was presenting these as “human-only, software-only” estimates:
Here are the median estimates of the “human-only, software-only” time needed to reach each milestone:
So it doesn’t seem like there’s a problem here?
I added up the median “Predictions for gap size” in the “How fast can the task difficulty gaps be crossed?” table, summing each set of predictions separately (“Eli”, “Nikola”, “FutureSearch”) to get three numbers ranging from 30-75.
Does this table cover the time between now and superhuman coder? I thought it started at RE-Bench, because:
I took all of this to be in the context of the phrase, about one page back, “For each gap after RE-Bench saturation”
The earlier explanation that Method 2 is “a more complex model starting from a forecast saturation of an AI R&D benchmark (RE-Bench), and then how long it will take to go from that system to one that can handle real-world tasks at the best AGI company” [emphasis added]
The first entry in the table (“Time horizon: Achieving tasks that take humans lots of time”) sounds more difficult than saturating RE-Bench.
Earlier, there’s a separate discussion forecasting time to RE-Bench saturation.
But sounds like I was misinterpreting?
Correct. Am I wrong in thinking that it’s usual to use the word “timelines” to refer to the entire arc of AI progress, including both the periods covered in the “Timelines Forecast” and “Takeoff Forecast”? But since this is all in the context of AI 2027, I should have clarified.
What’s your basis for “well-defined tasks” vs. “realistic tasks” to have very different doubling times going forward? Is the idea that the recent acceleration seems to be specifically due to RL, and RL will be applicable to well-defined tasks but not realistic tasks?
This seems like an extremely important question, so if you have any further thoughts / intuitions / data to share, I’d be very interested.
Thanks everyone for all the feedback and answers to my unending questions! The branching comments are starting to become too much to handle, so I’m going to take a breather and then write a followup post – hopefully by the end of the week but we’ll see – in which I’ll share some consolidated thoughts on the new (to me) ideas that surfaced here and also respond to some specific points.
Thanks.
I’m now very strongly feeling the need to explore the question of what sorts of activities go into creating better models, what sorts of expertise are needed, and how that might change as things move forward. Which unfortunately I know ~nothing about, so I’ll have to find some folks who are willing to let me pick their brains...
Thanks! I agree that my statements about Amdahl’s Law primarily hinge on my misunderstanding of the milestones, as elucidated in the back-and-forth with Ryan. I need to digest that; as Ryan anticipates, possibly I’ll wind up with thoughts worth sharing regarding the “human-only, software-only” time estimates, especially for the earlier stages, but it’ll take me some time to chew on that.
(As a minor point of feedback, I’d suggest adding a bit of material near the top of the timelines and/or takeoff forecasts, clarifying the range of activities meant to be included in “superhuman coder” and “superhuman AI researcher”, e.g. listing some activities that are and are not in scope. I was startled to see Ryan say “my sense is that an SAR has to be better than humans at basically everything except vision”; I would never have guessed that was the intended interpretation.)
I’ve (briefly) addressed the compute bottleneck question on a different comment branch, and “hard-to-automate activities aren’t a problem” on another (confusion regarding the definition of various milestones).
[Dependence on Narrow Data Sets] is only applicable to the timeline to the superhuman coder milestone, not to takeoff speeds once we have a superhuman coder. (Or maybe you think a similar argument applies to the time between superhuman coder and SAR.)
I do think it applies, if indirectly. Most data relating to progress in AI capabilities comes from benchmarks of crisply encapsulated tasks. I worry this may skew our collective intuitions regarding progress toward broader capabilities, especially as I haven’t seen much attention paid to exploring the delta between things we currently benchmark and “everything”.
Hofstadter’s Law As Prior
Math: We’re talking about speed up relative to what the human researchers would have done by default, so this just divides both sides equally and cancels out.
This feels like one of those “the difference between theory and practice is smaller in theory than in practice” situations… Hofstadter’s Law would imply that Hofstadter’s Law applies here. :-)
For one concrete example of how that could manifest, perhaps there is a delay between “AI models exist that are superhuman at all activities involved in developing better models” and “those models have been fully adopted across the organization”. Within a frontier lab, that specific delay might be immaterial; it’s just meant as an existence proof that there’s room for us to be missing things.
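As a rough sketch of how that kind of adoption lag could eat into a headline speedup (all numbers hypothetical, purely an existence-proof illustration):

```python
# Hypothetical numbers: a project that would take human researchers 12 months,
# a model that provides a 10x speedup once fully adopted, and a 2-month lag
# before the organization is actually using it everywhere.
human_only_months = 12
full_speedup = 10
adoption_delay = 2    # months of roughly human-speed work before full adoption

remaining_work = human_only_months - adoption_delay                  # human-months left
ai_assisted_months = adoption_delay + remaining_work / full_speedup  # 2 + 1 = 3
print(human_only_months / ai_assisted_months)                        # 4.0x effective, not 10x
```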
I think my short, narrowly technical response to this would be “agreed”.
Additional thoughts, which I would love your perspective on:
1. I feel like the idea that the human activities involved in creating better models are broader than just the stereotypical things an ML Ph.D. would do is under-explored. Elsewhere in this thread you say “my sense is that an SAR has to be better than humans at basically everything except vision.” There’s a lot to unpack there, and I don’t think I’ve seen it discussed anywhere, including in AI 2027. Do the stereotypical things an ML Ph.D. would do constitute 95% of the work? 50%? Less? Does the rest of the work mostly consist of other sorts of narrowly technical software work (coding, distributed systems design, etc.), or is there broad spillover into other areas of expertise, including non-STEM expertise? What does that look like? Etc.
(I try to make this point a lot, generally don’t get much acknowledgement, and as a result have started to feel a bit like a crazy person. I appreciate you giving some validation to the idea. Please let me know if you suspect I’ve over-interpreted that validation.)
1a. Why “except vision”? Does an SAR have to be superhuman at creative writing, so that it can push forward creative writing capabilities in future models? (Obviously, substitute any number of other expertise domains for “creative writing”.) If yes, then why doesn’t it also need to be superhuman at vision (so that it can push forward vision capabilities)? If no, then presumably creative writing is one of the exceptions implied by the “basically” qualifier; what else falls in there?
2. “Superhuman AI researcher” feels like a very bad term for a system that is meant to be superhuman at the full range of activities involved in producing better models. It strongly suggests a narrower set of capabilities, thus making it hard to hold onto the idea that a broad definition is intended. Less critically, it also seems worthwhile to better define what is meant to fall within the umbrella of “superhuman coder”.
3. As I read through AI 2027 and then wrote my post here, I was confused as to the breadth of skills meant to be implied by “superhuman coder” and (especially) “superhuman AI researcher”, and probably did not maintain a consistent definition in my head, which may have confused my thinking.
4. I didn’t spend much time evaluating the reasoning behind the estimated speedups at each milestone (5x, 25x, 250x, 2000x). I might have more to say after digging into that. If/when I find the time, that, plus the discussion we’ve just had here, might be enough grist for a followup post.
We now have several branches going, so I’m going to consolidate most of my response in just one branch since they’re converging on similar questions anyway. Here, I’ll just address this:
But, when considering activities that aren’t bottlenecked on the environment, then to achieve 10x acceleration you just need 10x more speed at the same level of capability.
I’m imagining that, at some intermediate stages of development, there will be skills for which AI does not even match human capability (for the relevant humans), and its outputs are of unusably low quality.
Thanks. This is helpful, but my intuition is substantially coming from the idea that there might be other factors involved (activities / processes involved in improving models that aren’t “thinking about algorithms”, “writing code”, or “analyzing data”). In other words, I have a fair amount of model uncertainty, especially when thinking about very large speedups.