Why do I expect the trend to be superexponential? Well, it seems like it sorta has to go superexponential eventually. Imagine: we've got AIs that can, with ~100% reliability, do tasks that take professional humans 10 years. But somehow they can't do tasks that take professional humans 160 years?
I don't think this means the real thing (underlying AI capability) has to go superexponential, just that "how long does it take humans to do a thing?" is a good metric when AI is sub-human but a poor one when AI is superhuman.
If we had a metric "how many seconds/turn does a grandmaster have to think to beat the current best chess-playing AI," it would go up at a nice steady rate until shortly after Deep Blue, at which point it shoots to infinity. But if we had a true measurement of chess quality, we wouldn't see any significant spike at the human level.
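To make this concrete, here's a toy simulation (a sketch with invented numbers, not real data): suppose the AI's true chess strength climbs at a steady linear Elo rate, while the grandmaster's effective strength rises with thinking time but saturates at a human ceiling. The function names and all constants (`ai_elo`, `BASE`, `CEILING`, `TAU`) are hypothetical, chosen just to show the shape of the curve.

```python
import math

# Toy model with invented numbers: the AI's "true" chess strength
# improves at a steady linear rate in Elo, nothing superexponential.
def ai_elo(year: int) -> float:
    return 2000 + 50 * (year - 1990)  # hypothetical +50 Elo/year

# Assume a grandmaster's effective Elo rises with thinking time per turn
# but saturates at a human ceiling, no matter how long they think.
BASE, CEILING, TAU = 2500.0, 2900.0, 60.0  # Elo at ~0 s/turn, human ceiling, time constant (s)

def seconds_to_match(ai: float) -> float:
    """Seconds/turn a grandmaster needs for their effective Elo to equal the AI's."""
    if ai <= BASE:
        return 0.0
    if ai >= CEILING:
        return math.inf  # no finite thinking time suffices
    # Invert gm(t) = CEILING - (CEILING - BASE) * exp(-t / TAU) for t.
    return -TAU * math.log((CEILING - ai) / (CEILING - BASE))

for year in range(1990, 2011, 2):
    print(f"{year}: AI Elo {ai_elo(year):.0f}, grandmaster needs {seconds_to_match(ai_elo(year)):.1f} s/turn")
```

In this toy setup the metric creeps up gradually and then prints `inf` once the AI's Elo crosses the human ceiling: the "shoots to infinity" behavior is entirely an artifact of the ceiling baked into the measuring stick, not of any spike in the underlying trend.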