I’m kind of surprised that the post doesn’t mention the other, larger discontinuities that they’ve found: nuclear weapons, high-temperature superconductivity, and building height.
Plus, it has been argued that the next AI winter is well on its way, i.e. that we are starting to see a decline in interest in AI rather than a further increase.
Metaculus has the closest thing to a prediction market on this topic that I’m aware of, which is worth looking at.
Unfortunately, interpreting expert opinion is tricky. On the one hand, in some surveys machine learning researchers put non-negligible probability on “human-level intelligence” (whatever that means) arriving within 10 years. On the other hand, my impression from interacting with the community is that the predominant opinion is still to confidently dismiss short-timeline scenarios, to the point of not even seriously engaging with them.
The linked survey is the most comprehensive one I’m aware of, and it points to the ML community collectively putting ~10% probability on HLAI within 10 years. If I thought one should defer to expert opinion, I would put a lot of weight on this survey and very little on the interactions the author of this piece has had. That said, the survey also (in my view) shows that the ML community is not that great at prediction.
All in all, my main disagreement with this post is about the level of progress that we’ve seen and are likely to see. ML has been steadily gaining a range of relevant capabilities, and the field has many researchers capable of pushing it forward through both incremental and fundamental research. The author implicitly thinks this is nowhere near enough for AGI in 10 years; my broad judgement is that it makes that achievement not unthinkable, though it’s hard to fully lay out the reasons behind that judgement.