2023 in AI predictions

Lots of people made AI predictions in 2023; here I compile a subset. When I see an AI prediction, I have a habit of setting an email reminder for its resolution date, so that when it resolves I can point out its accuracy or inaccuracy. I have compiled most of the email reminders from 2023 below, in chronological order by target date (earliest to latest). I’m planning to make these posts yearly, checking in on predictions whose dates have expired. Feel free to add references to more predictions made in 2023 in the comments.

In some cases, people refer to others’ predictions in a way that could be taken to imply agreement. This interpretation is not certain, but I’m including these for the sake of completeness.

March 2024

the gears to ascension: “Hard problem of alignment is going to hit us like a train in 3 to 12 months at the same time some specific capabilities breakthroughs people have been working on for the entire history of ML finally start working now that they have a weak AGI to apply to, and suddenly critch’s stuff becomes super duper important to understand.”

October 2024

John Pressman: “6-12 month prediction (80%): The alignment problem as the core of AI X-Risk will become a historical artifact as it’s largely solved or on track to being solved in the eyes of most parties and arguments increasingly become about competition and misuse. Few switch sides.”

July 2025

Jessica Taylor: “Wouldn’t be surprised if this exact prompt got solved, but probably something nearby that’s easy for humans won’t be solved?”

The prompt: “Find a sequence of words that is:

  • 20 words long

  • contains exactly 2 repetitions of the same word twice in a row

  • contains exactly 2 repetitions of the same word thrice in a row”

(note: thread contains variations and a harder problem.)
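For concreteness, here is a minimal sketch of how a candidate answer could be checked mechanically, under one debatable reading of the prompt: that “exactly 2 repetitions of the same word twice in a row” means exactly two maximal runs of length 2, and likewise exactly two maximal runs of length 3. The function and the example sequence are my own illustrations, not from the thread.

```python
def check(seq, n=20, doubles=2, triples=2):
    """Verify a candidate answer, reading the constraints as run-lengths."""
    words = seq.split()
    if len(words) != n:  # must be exactly 20 words long
        return False
    # Collect lengths of maximal runs of consecutive equal words.
    run_lengths = []
    i = 0
    while i < len(words):
        j = i
        while j < len(words) and words[j] == words[i]:
            j += 1
        run_lengths.append(j - i)
        i = j
    # Exactly 2 maximal runs of length 2, and exactly 2 of length 3.
    return run_lengths.count(2) == doubles and run_lengths.count(3) == triples

# 2 + 2 + 3 + 3 + 10 singletons = 20 words, so this passes under this reading:
print(check("a a b b c c c d d d e f g h i j k l m n"))  # True
```

Under a stricter reading (e.g. counting every adjacent pair, so that a thrice-repeated word also contains twice-repeated pairs), no sequence satisfies the constraints, which is part of what makes the prompt ambiguous for both humans and models.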

November 2026

Max Tegmark: “It’s crazy how the time left to weak AGI has plummeted from 20 years to 3 in just 18 months on http://metaculus.com. So you better stop calling AGI a ‘long-term’ possibility, or someone might call you a dinosaur stuck in the past”

The Metaculus question.

Siqi Chen: “what it means is within 3 years you will either be dead or have a god as a servant”.

Elon Musk: “If you say ‘smarter than the smartest human at anything’? It may not quite [be] smarter than all humans—or machine-augmented humans, because, you know, we have computers and stuff, so there’s a higher bar… but if you mean, it can write a novel as good as JK Rowling, or discover new physics, invent new technology? I would say we are less than 3 years from that point.”

December 2026

Jai Bhavnani: “Baseline expectation: 90%+ of smart contracts will get exploited in the next 3 years. These exploits will be found by AIs. We need solutions.”

October 2028

Stuart Russell: “Everyone has gone from 30-50 years, to 3-5 years.”

November 2028

Tammy: “when i say ‘we have approximately between 0 and 5 years’ people keep thinking that i’m saying ‘we have approximately 5 years’. we do not have approximately 5 years. i fucking wish. we have approximately between 0 and 5 years. we could actually all die of AI next month.”

December 2028

Tyler John: “Yep. If discontinuous leaps in AI capabilities are 3-5 years away we should probably start to think a little bit about how to prepare for that. The EU AI Act has been in development for 5 years and still isn’t passed yet. We just can’t take the wait and see approach any longer.”

Mustafa Suleyman: “[Current models have already] … arguably passed the Turing Test. I’ve proposed a test which involves [AIs] going off and taking $100,000 investment, and over the course of three months, try to set about creating a new product, researching the market, seeing what consumers might like, generating some new images, some blueprints of how to manufacture that product, contacting a manufacturer, getting it made, negotiating the price, dropshipping it, and then ultimately collecting the revenue. And I think that over a 5 year period, it’s quite likely that we will have an ACI, an artificial capable intelligence that can do the majority of those tasks autonomously. It will be able to make phone calls to other humans to negotiate. It will be able to call other AIs in order to establish the right sequence in a supply chain, for example.”

Aleph: “when my AGI timeline was 30-50 years vs when it became like 5 years”

2030

Jacob Steinhardt: “I’ll refer throughout to ‘GPT-2030’, a hypothetical system that has the capabilities, computational resources, and inference speed that we’d project for large language models in 2030 (but which was likely trained on other modalities as well, such as images)… I expect GPT-2030 to have superhuman coding, hacking, and mathematical abilities… I personally expect GPT-2030 to be better than most professional mathematicians at proving well-posed theorems… Concretely, I’d assign 50% probability to the following: ‘If we take 5 randomly selected theorem statements from the Electronic Journal of Combinatorics and give them to the math faculty at UCSD, GPT-2030 would solve a larger fraction of problems than the median faculty and have a shorter-than-median solve time on the ones that it does solve’.”

(The post contains other, more detailed predictions).

December 2033

Roko Mijic: “No, Robin, it won’t take millions of years for AIs to completely outperform humans on all tasks, it’ll take about 10 years”

December 2034

Eliezer Yudkowsky: “When was the last human being born who’d ever grow into being employable at intellectual labor? 2016? 2020?”

(note: I’m calculating 2016+18 = 2034 on the assumption that some 18-year-olds are employable in intellectual labor; computing 2020+14 = 2034, on the assumption that some 14-year-olds can be employed in intellectual labor, gives the same year. There’s room for uncertainty regarding what time range the quote is referring to, so I’m taking a rough median here.)

December 2035

Multiple people: “STEM+ AI will exist by the year 2035.” (range of predictions, many >=50%, some >=90%).

Definition: “Let ‘STEM+ AI’ be short for ‘AI that’s better at STEM research than the best human scientists (in addition to perhaps having other skills)’”

See the Twitter/X thread also.

October 2040

Eliezer Yudkowsky: “Who can possibly still imagine a world where a child born today goes to college 17 years later?”

2043

Ted Sanders: “Transformative AGI by 2043 is less than 1% likely.”

Brian Chau: “AI progress in general is slowing down or close to slowing down. AGI is unlikely to be reached in the near future (in my view <5% by 2043). Economic forecasts of AI impacts should assume that AI capabilities are relatively close to the current day capabilities.”

2075

Tsvi: “Median 2075ish. IDK. This would be further out if an AI winter seemed more likely, but LLMs seem like they should already be able to make a lot of money.” (for when AGI comes)

2123

Andrew Ng:

As a hypothetical example of a calculation to estimate the risk of AI “taking over” and causing human extinction over the next 100 years, a plausible scenario might be:

  • One of the world’s top AI systems goes rogue, is ‘misaligned’ and either deliberately or accidentally picks a goal that involves wiping out humanity… My estimate: less than 0.1% chance (or 1/1000).

  • Other leading AI systems do not identify the rogue actor and raise the alarm/act to stop the rogue one… My estimate: less than 1% chance (or 1/100).

  • This rogue AI system gains the ability (perhaps access to nuclear weapons, or skill at manipulating people into using such weapons) to wipe out humanity… My estimate: less than 1% chance (or 1/100).

If we multiply these numbers together, we end up with 0.1% * 1% * 1% = a 1/10,000,000 chance.
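Spelling out the arithmetic behind that final figure:

$$0.1\% \times 1\% \times 1\% = 10^{-3} \times 10^{-2} \times 10^{-2} = 10^{-7} = \frac{1}{10{,}000{,}000}$$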

Longer

Robin Hanson: “Centuries” (regarding time until AGI eclipses humans at almost all tasks).