[Question] Greatest Lower Bound for AGI

(Note: I assume that the timeline between AGI and an intelligence explosion is an order of magnitude shorter than the timeline between now and the first AGI. I will therefore use AGI and intelligence explosion interchangeably.)

Take a grad student deciding whether to do a PhD (~3-5 years). The prospect of an intelligence explosion within 10 years might make him change his mind.

More generally, estimating a scientifically sound infimum for AGI timelines would favor coordination and clear thinking.

My baselines for lower bounds on AGI have been optimists’ estimates. I actually stumbled upon the concept of the singularity through this documentary, where Ben Goertzel asserts in 2009 that we can have a positive singularity in 10 years “if the right amount of effort is expanded in the right direction. If we really really try” (I later realized that he made a similar statement in 2006).

Ten years after Goertzel’s statements, I’m still confused about how long it would take humanity to reach AGI given global coordination. This leads me to this post’s question:

According to your model, in which year does the probability of AGI arriving within that year (January through December) first reach 1%, and why?

I’m especially curious about arguments that don’t (only) rely on compute trends.
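To make the phrasing concrete, here is a minimal sketch of what I mean (all numbers are made up, purely for illustration): given your cumulative distribution over AGI arrival dates, the answer is the first year whose within-year (January-December) probability mass reaches 1%.

```python
# Toy illustration, NOT a forecast: the cumulative probabilities below
# are invented solely to show how the question is operationalized.

# Hypothetical cumulative probability of AGI arriving by the END of each year.
cumulative = {
    2019: 0.002,
    2020: 0.006,
    2021: 0.013,
    2022: 0.025,
    2023: 0.045,
}

prev = 0.0
first_year = None
for year in sorted(cumulative):
    within_year = cumulative[year] - prev  # P(AGI arrives during this year)
    print(f"{year}: P(within year) = {within_year:.3f}")
    if first_year is None and within_year >= 0.01:
        first_year = year
    prev = cumulative[year]

print("First year with >= 1% within-year probability:", first_year)
```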


EDIT: The first answers seem to converge on a year between 2019 and 2021. This surprises me: outside of the AI Safety bubble, I think AI researchers would be really surprised (would assign less than a 1% chance) to see AGI within 10 years.

I think my confusion about short timelines comes from the dissonance between estimates in AI Alignment research and the intuitions of top AI researchers. In particular, I vividly remember a thread with Yann Le Cun where he confidently dismissed short timelines, comment after comment.

My follow-up question would therefore be:

“What is an important part of your model that you think top ML researchers (such as Le Cun) are missing?”