It seems like basically everything in this is already true today. Not sure what you’re predicting here.
I mean, of course it’s true today, right? It would be weird to predict “AI can’t do XX in the future” (and that’s most of the predictions here) if that weren’t already true today.
I just don’t think there is much to this prediction.
It takes a set of specific predictions, says none of them will happen, and by the nature of a conjunctive prediction, most of them won’t. It would be more interesting to hear how AI will and won’t progress rather than just denying a prediction that was already unlikely to be perfect.
Inevitably they’ll be wrong on some of these, but they’ll look more right at a surface level because they’ll be right on most of them.
If you think I’ll be right on most of these, then I think you disagree with the AI 2027 predictions.
AI progress can be rapid, but the pathway to it may involve different capability unlocks. For example, it may be that you automate work more broadly and then reinvest that into more compute (or automate chipmaking itself). Or you can get the same unlocks without rapid progress: for example, you get a superhuman coder but run into different bottlenecks.
I think it’s pretty obvious AI progress won’t completely stall out, so I don’t think that’s the prediction you’re making? It’s one thing to say AI progress won’t be rapid and then give a specific story as to why. Later, if you hit most of your marks, it’ll look like a much more valuable prediction than simply saying it won’t be rapid. (The same applies to AI 2027.)
The authors of AI 2027 wrote a pretty specific scenario before the release of ChatGPT and looked really prescient after the fact, since it turned out to be mostly accurate.
Most of my predictions simply contradict AI 2027, a well-regarded set of predictions about AI progress through the end of 2027. I am stating that I disagree, and why.