Trying to evaluate this forecast in order to figure out how to update on the newer one.
It certainly reads as surprisingly prescient. Notably, it predicts both the successes and the failures of the LLM paradigm: the ongoing discussion regarding how “shallow” or not their understanding is, the emergence of the reasoning paradigm, the complicated LLM bureaucracies/scaffolds, lots of investment in LLM-wrapper apps which don’t quite work, the relative lull of progress in 2024, troubles with agency and with generating new ideas, “scary AI” demos being dismissed because LLMs do all kinds of whimsical bullshit...
And it was written in the base-GPT-3 era, before ChatGPT, before even the Instruct models. I know I couldn’t have come close to calling any of this back then. Pretty wild stuff.
In comparison, the new “AI 2027” scenario is very… ordinary. Nothing in it is surprising to me; it’s indeed the “default”, “nothing new happens” scenario in many ways.
But perhaps the difference is in the eye of the beholder. Back in 2021, I barely knew how DL worked, never mind being well-versed in deep LLM lore. The real question is: if I had been as immersed in the DL discourse in 2021 as I am now, would this counterfactual 2021!Thane have considered this forecast as standard as the AI 2027 forecast seems to 2025!Thane?
More broadly: “AI 2027” seems like the reflection of the default predictions regarding AI progress in certain well-informed circles/subcultures. Those circles/subcultures are fairly broad nowadays; e.g., significant parts of AI Twitter. Back in 2021, the AI subculture was much smaller… But was there, similarly, an obviously maximally-well-informed fraction of that subculture which would’ve considered “What 2026 Looks Like” the somewhat-boring default prediction?
Reframing: @Daniel Kokotajlo, do you recall how wildly speculative you considered “What 2026 Looks Like” at the time of writing, and whether it’s more or less speculative than “AI 2027” feels to you now? (And perhaps the speculativeness levels of the pre-2027 and post-2027 parts of the “AI 2027” report should be evaluated separately here.)
Another reframing: To what extent do you think your alpha here was in making unusually good predictions, vs. in paying attention to the correct things at a time when no-one focused on them, then making fairly basic predictions/extrapolations? (Which is important for evaluating how much your forecasts should be expected to “beat the (prediction) market” today, now that (some parts of) that market are paying attention to the right things as well.)
Great question!

I do remember thinking that the predictions in What 2026 Looks Like weren’t as wild to insiders as they were to everyone else. Like, various people I knew at the time at Anthropic and OpenAI were like “Great post, super helpful, seems about right to me.”
However, I also think that, compared to What 2026 Looks Like, AI 2027 is more… toned down? Sharp edges rounded off? Juicy stuff taken out? That’s because it underwent more scrutiny, because we had limited space, and because we had multiple authors. Lots of subplots were deleted; lots of cute and cool ideas were deleted.
My guess is that the answer to your question is 2/3rds “You have learned more about AI compared to what you knew in 2021” and 1/3rd “AI 2027 is a bit more conservative/cautious than W2026LL”.
Another thing though: In an important sense, AI 2027 feels more speculative to me now than W2026LL did at the time of writing. This is because AI 2027 is trying to predict something inherently more difficult to predict. W2026LL was trying to predict pretty business-as-usual AI capabilities growth trends and the effects they would have on society. AI 2027 is doing that… for about two years, then the intelligence explosion starts and things go wild. I feel like if AI 2027 looks as accurate in 2029 as W2026LL looks now, that’ll be a huge fucking achievement, because it is attempting to forecast over more unknowns so to speak.
To what extent do you think your alpha here was in making unusually good predictions, vs. in paying attention to the correct things at a time when no-one focused on them, then making fairly basic predictions/extrapolations?
In my experience, the best way to make unusually good predictions is to pay attention to the correct things at a time when no one is focusing on them, and then make fairly basic extrapolations/predictions. (How else would you do it?)