Curated. I’ve been following this project for a while (you can see some of the earlier process in Daniel’s review of his “What 2026 looks like” post, and in his comment on Tom Davidson’s What a Compute-centric framework says about AI takeoff). I’ve participated in one of the wargames that helped inform what sort of non-obvious things might happen along the path of AI takeoff.
(disclosure, Lightcone did a lot of work on the website of this project, although I was only briefly involved)
Like others have said, I appreciate this both for having a lot of research behind it, and for laying out something concrete enough to visualize and disagree with. Debating individual “event X will happen” predictions isn’t exactly the point, since some of them are merely illustrative of “something similar that might happen.” But it’s helpful for debating underlying models about what sort-of-events are likely to happen.
One of the central, obvious debates here is “does it actually make sense to just extrapolate the trends this way, or is AGI takeoff dependent on some unrelated progress?”. Recent posts like A Bear Case and Have LLMs Generated Novel Insights?[1] have argued the opposite view. I lean towards “the obvious trends will continue and the obvious AGI approaches will basically work”, but only put it at a bit over 50%. I think it’s reasonable to have a lower credence there. But one thought I’ve had this week is: perhaps longer-timeline folk (with some credence on short timelines) should spend the next year-or-so focusing more on plans that help in short-timeline worlds, and then return to longer time-horizon plans if, a year from now, it seems like progress has slowed and there’s some missing sauce.[2]
I think it would have been nicer if a third scenario had been presented – I think the current two-scenario setup comes across as more of a rhetorical device, i.e. “if y’all don’t change your actions you will end up on the doomy racing scenario.” I believe Daniel et al. that that wasn’t their intent, but I think a third scenario that highlighted some orthogonal axis of concern would have been helpful for getting people into the mindset of actually “rolling the simulation forward” rather than picking and arguing for a side.
Notably, written before AI 2027 came out, although I think they were reacting to an intellectual scene that was nontrivially informed by earlier drafts of it.
On the other hand, if most of your probability-mass is on mediumish timelines, and you have a mainline plan you think you could barely pull off in 10 years, such that taking a year off seems likely to make the difference, then this suggestion probably doesn’t apply to you.