I do remember thinking that the predictions in What 2026 Looks Like weren’t as wild to insiders as they were to everyone else. Like, various people I knew at the time at Anthropic and OpenAI were like “Great post, super helpful, seems about right to me.”
However, I also think that AI 2027 is more… toned down? Sharp edges rounded off? Juicy stuff taken out? compared to What 2026 Looks Like, because it underwent more scrutiny, because we had limited space, and because we had multiple authors. Lots of subplots were deleted; lots of cute and cool ideas were cut.
My guess is that the answer to your question is 2/3rds “You have learned more about AI compared to what you knew in 2021” and 1/3rd “AI 2027 is a bit more conservative/cautious than W2026LL.”
Another thing though: In an important sense, AI 2027 feels more speculative to me now than W2026LL did at the time of writing. This is because AI 2027 is trying to predict something inherently more difficult to predict. W2026LL was trying to predict pretty business-as-usual AI capabilities growth trends and the effects they would have on society. AI 2027 is doing that… for about two years, then the intelligence explosion starts and things go wild. I feel like if AI 2027 looks as accurate in 2029 as W2026LL looks now, that’ll be a huge fucking achievement, because it is attempting to forecast over more unknowns, so to speak.
To what extent do you think your alpha here was in making unusually good predictions, vs. in paying attention to the correct things at a time when no one focused on them, then making fairly basic predictions/extrapolations?
Great question! In my experience, the best way to make unusually good predictions is to pay attention to the correct things at a time when no one is focusing on them, and then make fairly basic extrapolations/predictions. (How else would you do it?)