Thanks for writing this up! I also want to register that I agree with all of this, except maybe the part where AIs can’t tell novel funny jokes—I expect this to be relatively easy. But of course it depends on the definition of ‘novel’.
I struggled to do this exercise myself because when I looked at AI as a normal technology I felt like I basically agreed with most of their thinking, but it was also hard to find concrete differences between their predictions and AI2027, at least in the near term. For example, a claim like “LLMs are broadly acknowledged to be plateauing” will probably be concurrently both true and false in a way that’s hard to resolve—a lot of people may complain that LLMs are plateauing while the benchmark scores and usage stats show otherwise.
It’s funny that everyone is doubting the funny jokes part. I view funny jokes as computationally hard to generate, probably because I’ve sat down and actually tried, and it doesn’t seem fundamentally easier than coming up with brilliant essay ideas or whatever. But most people’s experience is with telling jokes in the moment, which is a different, shallower kind of activity. Maybe AI will be better at that, but not so good at, e.g., writing an hour of stand-up material that’s truly brilliant?
For example, for things like “LLMs are broadly acknowledged to be plateauing”, it’s probably going to be concurrently both true and false in a way that’s hard to resolve
Yes, this is somewhat ambiguous, I admit. I’m kind of fine with that, though. I’m not placing any bets—I’m just trying to record what I think is going to happen, and the uncertainty in the wording reflects my own uncertainty about it.