Musk on AGI Timeframes

Elon Musk submitted a comment to edge.org a day or so ago, on this article. It was later removed.

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast - it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...

Now Elon has been making noises about AI safety lately in general, including for example mentioning Bostrom's Superintelligence on twitter. But this is the first time that I know of that he's come up with his own predictions of the timeframes involved, and I think his are quite soon compared to most.

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

We can compare this to MIRI's post in May this year, When Will AI Be Created, which argues that it seems reasonable to expect AI to be further away, but also that there is a great deal of uncertainty on the issue.

Of course, “something seriously dangerous” might not refer to full blown superintelligent uFAI: there is plenty of room for disasters of intermediate magnitude, somewhere between the 2010 flash crash and Clippy turning the universe into paperclips.

In any case, it's true that Musk has more “direct exposure” to those on the frontier of AGI research than your average person, and it's also true that he has an audience, so I think there is some interest to be found in his comments here.