AGI Predictions

This post is a collection of key questions that feed into AI timelines and AI safety work where it seems like there is substantial interest or disagreement amongst the LessWrong community.

You can make a prediction on a question by hovering over the widget and clicking. You can update your prediction by clicking at a new point, and remove your prediction by clicking on the same point. Try it out:

Add questions & operationalizations

This is not intended to be a comprehensive list, so I’d love for people to add their own questions – here are instructions on making your own embedded question. If you have better operationalizations of the questions, you can make your own version in the comments. If there’s general agreement on an alternative operationalization being better, I’ll add it into the post.

Questions

AGI definition

We’ll define AGI in this post as a unified system that, for almost all economically relevant cognitive tasks, at least matches any human’s ability at the task. This is similar to Rohin Shah and Ben Cottier’s definition in this post.

Safety Questions

Timelines Questions

See Forecasting AI timelines, Ajeya Cotra’s OP AI timelines report, and Adam Gleave’s #AN80 comment, for more context on this breakdown. I haven’t tried to operationalize this too much, so feel free to be more specific in the comments.

The first three questions in this section are mutually exclusive; that is, the probabilities you assign to them should not sum to more than 100%.

Non-technical factor questions

Operationalizations

Safety Questions

1. Will AGI cause an existential catastrophe?

  • Existential catastrophe is defined here according to Toby Ord’s definition in The Precipice: “An event that causes extinction or the destruction of humanity’s long-term potential”.

  • This assumes that everyone currently working on AI alignment continues to do so.

2. Will AGI cause an existential catastrophe without additional intervention from the AI Alignment research community?

  • Roughly, the AI Alignment research community includes people working at CHAI, MIRI, current safety teams at OpenAI and DeepMind, FHI, AI Impacts, and similar orgs, as well as independent researchers writing on the AI Alignment Forum.

  • “Without additional intervention” = everyone currently in this community stops working on anything directly intended to improve AI safety as of today, 11/20/2020. They may work on AI in a way that indirectly and incidentally improves AI safety, but only to the same degree as researchers outside of the AI alignment community are currently doing this.

3. Will there be an arms race dynamic in the lead-up to AGI?

  • An arms race dynamic is operationalized as: 2 years before superintelligent AGI is built, there are at least 2 companies/projects/countries at the cutting edge, each within 2 years of each other’s technology, that are competing and not collaborating.

4. Will a single AGI or AGI project achieve a decisive strategic advantage?

  • This question uses Bostrom’s definition of decisive strategic advantage: “A level of technological and other advantages sufficient to enable it to achieve complete world domination” (Bostrom 2014).

5. Will > 50% of AGI researchers agree with safety concerns by 2030?

  • “Agree with safety concerns” means: broadly understand the concerns of the safety community, and agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it (Rohin Shah’s operationalization from this post).

6. Will there be a complete 4-year interval in which world GDP doubles before the first 1-year interval in which world GDP doubles?

  • This is essentially Paul Christiano’s operationalization of the rate of development of AI from his post on Takeoff speeds. I’ve used this specific operationalization rather than “slow vs fast” or “continuous vs discontinuous” due to the ambiguity in how people use these terms. A rough sketch of how one might check this operationalization is included below.
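
To make the operationalization above concrete, here is a minimal sketch (not from the original post) of how one might check it against an annual world-GDP series. The function name, the annual data frequency, and the exact boundary condition are all assumptions chosen for illustration; the question itself does not pin these down.

```python
def slow_takeoff_first(gdp):
    """Rough check of the slow-takeoff operationalization on annual GDP data.

    Returns True if some complete 4-year interval in which GDP doubles
    finishes no later than the start of the first 1-year interval in which
    GDP doubles. `gdp` is a list of annual world GDP values, one per year.
    """
    n = len(gdp)
    # Start year of the first 1-year doubling interval, if any.
    first_fast_start = next(
        (t for t in range(n - 1) if gdp[t + 1] >= 2 * gdp[t]),
        float("inf"),
    )
    # Look for a 4-year doubling interval that completes by that point.
    return any(
        gdp[s + 4] >= 2 * gdp[s] and s + 4 <= first_fast_start
        for s in range(max(n - 4, 0))
    )

# Toy series: steady ~26% annual growth doubles GDP over a few years before
# any single year in which it doubles, so this path counts as "slow".
print(slow_takeoff_first([100, 126, 159, 200, 252, 600]))  # True
```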

7. Will AGI cause an existential catastrophe, conditional on there being a 4-year period of doubling of world GDP before a 1-year period of doubling?

  • Uses the same definition of existential catastrophe as previous questions.

8. Will AGI cause an existential catastrophe, conditional on there being a 1-year period of doubling of world GDP without there first being a 4-year period of doubling?

  • For example, we go from current growth rates to doubling within a year.

  • Uses the same definition of existential catastrophe as previous questions. A rough consistency check relating the answers to questions 1, 6, 7, and 8 is sketched below.
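
Questions 6–8 interact with question 1: one rough way to keep the answers consistent (not part of the original post, and using made-up numbers) is a total-probability calculation that treats the “slow” condition of question 7 and the “fast” condition of question 8 as if they were exhaustive. They are not quite exhaustive (for example, worlds in which no doubling ever occurs), so treat this only as a loose sanity check.

```python
# Made-up probabilities purely for illustration; substitute your own answers.
p_slow = 0.6              # Q6: P(a 4-year GDP doubling completes before any 1-year doubling)
p_fast = 1 - p_slow       # simplifying assumption: every other world counts as "fast"
p_cat_given_slow = 0.10   # Q7: P(existential catastrophe | slow)
p_cat_given_fast = 0.30   # Q8: P(existential catastrophe | fast)

# Law of total probability, under the rough assumption that the two cases are exhaustive.
p_cat = p_slow * p_cat_given_slow + p_fast * p_cat_given_fast
print(f"Implied P(AGI causes existential catastrophe): {p_cat:.2f}")  # compare with Q1
```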

Timelines Questions

9. Will we get AGI from deep learning with small variations, without more insights on a similar level to deep learning?

  • An example would be something like GPT-N + RL + scaling.

10. Will we get AGI from 1-3 more insights on a similar level to deep learning?

  • Self-explanatory.

11. Will we need > 3 breakthroughs on a similar level to deep learning to get AGI?

  • Self-explanatory.

12. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling?

  • This includes: 1) We are unable to continue scaling, e.g. due to limitations on compute, dataset size, or model size, or 2) We can practically continue scaling but the increase in AI capabilities from scaling plateaus (see below).

13. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because we are unable to continue scaling?

  • Self-explanatory.

14. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because the increase in AI capabilities from scaling plateaus?

  • Self-explanatory.

Non-technical factor questions

15. Will we experience an existential catastrophe before we build AGI?

  • Existential catastrophe is defined here according to Toby Ord’s definition in The Precipice: “An event that causes extinction or the destruction of humanity’s long-term potential”.

  • This does not include events that would slow the progress of AGI development but are not existential catastrophes.

16. Will there be another AI Winter (a period commonly referred to as such) before we develop AGI?

  • From Wikipedia: “In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.”

  • This question asks whether people will *refer* to a period as an AI winter: for example, whether Wikipedia and similar sources refer to it as a third AI winter.

Additional resources

Big thanks to Ben Pace, Rohin Shah, Daniel Kokotajlo, Ethan Perez, and Andreas Stuhlmüller for providing really helpful feedback on this post, and suggesting many of the operationalizations.