I firmly believe that the OP’s author should have reduced the uncertainty at least to a Lifland-like estimate. Additionally, I struggle to understand most constraints related to broad timelines. Whatever the timelines are, our end goal is to ensure that ASI is either never created or is aligned, and aligned to something other than a dystopia. Preventing a misaligned ASI requires leverage at least over actors as reckless as xAI, and preventing Intelligence Curse-like outcomes or AI-enabled dictatorships requires some influence over power struggles. Such influence requires us to ensure that politicians occupying positions of power act to prevent risks, rather than doing things like destroying Anthropic for refusing to participate in mass surveillance. But I don’t see any pathways other than infecting politicians with the right memes (think of IABIED’s attempt to flood politicians with calls, letters and e-mails, or of the IABIED march) and placing infected people into higher-level positions.
Moreover, Kokotajlo’s timeline implies a 50% chance of TED-AI before Jan 2031 or before Oct 2032, while Eli’s timeline implies a 50% chance of TED-AI before Feb 2035 or Apr 2036. Taken at face value, these estimates mean that P(TED-AI is created within the next 10 years) is around 50% (or, in Kokotajlo’s case, 62% or even 73%), making a project that requires 20 years to complete unlikely to have an effect.
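To illustrate how a median date translates into a within-10-years probability, here is a toy sketch. The lognormal shape and the spread parameter are my assumptions, not anyone's actual model; the point is only that a ~5-year median (roughly Kokotajlo's Jan 2031, seen from early 2026) pushes P(TED-AI within 10 years) well above 50% for any reasonable spread.

```python
# Toy illustration (assumed lognormal, not the forecasters' actual distributions):
# treat years-until-TED-AI as lognormal with a given median and log-space spread,
# then read off the probability of arrival within 10 years.
from math import log, sqrt, erf

def lognormal_cdf(x: float, median: float, sigma: float) -> float:
    """CDF of a lognormal parameterized by its median and log-space std dev."""
    return 0.5 * (1.0 + erf(log(x / median) / (sigma * sqrt(2.0))))

# Median of ~5 years out; sigma is a free parameter I am guessing at.
for sigma in (0.75, 1.0, 1.5):
    p10 = lognormal_cdf(10.0, median=5.0, sigma=sigma)
    print(f"sigma={sigma}: P(TED-AI within 10 years) ~ {p10:.2f}")
```

For these spreads the probability lands between roughly 0.68 and 0.82, in the same ballpark as the 62–73% figures above; only a very heavy right tail would pull it back toward 50%.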
I firmly believe that the OP’s author should have reduced the uncertainty at least to a Lifland-like estimate.
Moreover, Kokotajlo’s timeline implies a 50% chance of TED-AI before Jan 2031 or before Oct 2032, Eli’s timeline implies a 50% chance of TED-AI before Feb 2035 or Apr 2036.
IDK what you mean by “TED-AI” but, in case you haven’t noticed, Ord’s median seems to be 2038, which is like 2 or 3 years later than Lifland’s.
I think everyone should have a distribution that is roughly this shape. Here’s mine:
TED-AI is defined by Kokotajlo-Lifland as Top Expert Dominating AI. However, I struggle to understand the origins of @Toby_Ord’s distribution. I suspect that his sources for longer timelines are as hard to rely on as Cotra’s heavily criticized estimate or the fact that “all the revenue growth in the industry has corresponded to a scaling up of the supply of inference compute so that revenue per H100 equivalent has remained fairly constant.” Unlike things like the Epoch Capabilities Index as a function of training compute, or the ARC-AGI leaderboard per dollar spent (which might imply that no possible CoT-based system is far more effective at ARC-AGI than Gemini 3 Flash and Gemini 3.1 Pro), Ergil’s argument doesn’t actually claim anything about the capabilities of AI systems that don’t even exist yet.