The main thing that would predict slower takeoff is if early AGI systems turn out to be extremely computationally expensive.
Surely that’s only under the assumption that Eliezer’s conception of AGI (a simple, general optimisation algorithm) is right, and Robin’s (a big, intricate system comprising very many separate modules) is wrong? Is it just that you think that assumption is pretty certain to be right? Or are you saying that even under the Hansonian model of AI, we’d still get a FOOM anyway?
While I find Robin’s model more convincing than Eliezer’s, I’m still pretty uncertain.
That said, there are two pieces of evidence that would push me fairly strongly towards the Yudkowskian view:
1. A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing, as Scott Alexander discussed recently, or something along similar lines.
2. Long-run data showing AI systems gradually increasing in capability without any increase in complexity. The AlphaGo Zero (AGZ) example here might be part of an overall trend in that direction, but as a single data point it really doesn’t say much.