AGI Predictions

This post collects key questions that feed into AI timelines and AI safety work, where there seems to be substantial interest or disagreement among the LessWrong community.

You can make a prediction on a question by hovering over the widget and clicking. You can update your prediction by clicking at a new point, and remove your prediction by clicking on the same point. Try it out:

Will more than 50 people predict on this post?

Add questions & operationalizations

This is not intended to be a comprehensive list, so I’d love for people to add their own questions – here are instructions on making your own embedded question. If you have better operationalizations of the questions, you can make your own version in the comments. If there’s general agreement on an alternative operationalization being better, I’ll add it into the post.

Questions

AGI definition

We’ll define AGI in this post as a unified system that, for almost all economically relevant cognitive tasks, at least matches any human’s ability at the task. This is similar to Rohin Shah and Ben Cottier’s definition in this post.

Safety Questions

Will AGI cause an existential catastrophe?

Will AGI cause an existential catastrophe without additional intervention from the existing AI Alignment research community?

Will there be an arms race dynamic in the lead-up to AGI?

Will a single AGI or AGI project achieve a decisive strategic advantage?

Will > 50% of AGI researchers agree with safety concerns by 2030?

Will there be a 4 year interval in which world GDP doubles before the first 1 year interval in which world GDP doubles?

Will AGI cause existential catastrophe conditional on there being a 4 year period of doubling of world GDP before a 1 year period of doubling?

Will AGI cause existential catastrophe conditional on there being a 1 year period of doubling of world GDP without there first being a 4 year period of doubling?

Timelines Questions

See Forecasting AI timelines, Ajeya Cotra’s Open Philanthropy AI timelines report, and Adam Gleave’s #AN80 comment for more context on this breakdown. I haven’t tried to operationalize these questions too precisely, so feel free to be more specific in the comments.

The first three questions in this section are mutually exclusive — that is, the probabilities you assign to them should not sum to more than 100%.
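As a quick sanity check, a coherent assignment to three mutually exclusive paths must leave the total at or under 100%; any remainder is probability mass on outcomes not covered by the three questions. The values below are invented placeholders, not forecasts from this post:

```python
# Coherence check for mutually exclusive forecasts.
# Example probabilities (placeholders, not recommendations):
p_small_variations = 0.40  # deep learning + small variations
p_few_insights = 0.35      # 1-3 more deep-learning-level insights
p_many_insights = 0.15     # > 3 such breakthroughs

total = p_small_variations + p_few_insights + p_many_insights
assert total <= 1.0, "mutually exclusive probabilities exceed 100%"
print(f"remaining mass for other outcomes: {1 - total:.2f}")
```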

Will we get AGI from deep learning with small variations, without more insights on a similar level to deep learning?

Will we get AGI from 1-3 more insights on a similar level to deep learning?

Will we need > 3 breakthroughs on a similar level to deep learning to get AGI?

Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling?

Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because we are unable to continue scaling?

Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because the increase in AI capabilities from scaling plateaus?

Non-technical factor questions

Will we experience an existential catastrophe before we build AGI?

Will there be another AI Winter (a period commonly referred to as such) before we develop AGI?

Operationalizations

Safety Questions

1. Will AGI cause an existential catastrophe?

  • Existential catastrophe is defined here according to Toby Ord’s definition in The Precipice: “An event that causes extinction or the destruction of humanity’s long-term potential”.

  • This assumes that everyone currently working on AI alignment continues to do so.

2. Will AGI cause an existential catastrophe without additional intervention from the AI Alignment research community?

  • Roughly, the AI Alignment research community includes people working at CHAI, MIRI, current safety teams at OpenAI and DeepMind, FHI, AI Impacts, and similar orgs, as well as independent researchers writing on the AI Alignment Forum.

  • “Without additional intervention” = everyone currently in this community stops working on anything directly intended to improve AI safety as of today, 11/20/2020. They may work on AI in a way that indirectly and incidentally improves AI safety, but only to the same degree as researchers outside of the AI alignment community are currently doing this.

3. Will there be an arms race dynamic in the lead-up to AGI?

  • An arms race dynamic is operationalized as: 2 years before superintelligent AGI is built, there are at least 2 companies/projects/countries at the cutting edge, each within 2 years of each other’s technology, who are competing and not collaborating.

4. Will a single AGI or AGI project achieve a decisive strategic advantage?

  • This question uses Bostrom’s definition of decisive strategic advantage: “A level of technological and other advantages sufficient to enable it to achieve complete world domination” (Bostrom 2014).

5. Will > 50% of AGI researchers agree with safety concerns by 2030?

  • “Agree with safety concerns” means: broadly understand the concerns of the safety community, and agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it (Rohin Shah’s operationalization from this post).

6. Will there be a 4 year interval in which world GDP doubles before the first 1 year interval in which world GDP doubles?

  • This is essentially Paul Christiano’s operationalization of the rate of development of AI from his post on Takeoff speeds. I’ve used this specific operationalization rather than “slow vs fast” or “continuous vs discontinuous” due to the ambiguity in how people use those terms.
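For concreteness, the doubling criterion can be sketched in code. This is a toy illustration with invented annual GDP figures and invented function names, not anything from the Takeoff speeds post itself:

```python
def first_doubling_interval(gdp, years):
    """Return the end index of the first interval of length `years`
    over which GDP at least doubles, or None if there is none."""
    for end in range(years, len(gdp)):
        if gdp[end] >= 2 * gdp[end - years]:
            return end
    return None

def slow_takeoff(gdp):
    """True if a complete 4-year doubling interval finishes before the
    first 1-year doubling interval begins (the operationalization above)."""
    one = first_doubling_interval(gdp, 1)
    four = first_doubling_interval(gdp, 4)
    if one is None:
        return four is not None  # only the slower doubling has happened
    return four is not None and four <= one - 1

# Toy series: steady ~20%/yr growth (doubling over ~4 years), then a jump.
gdp = [100, 120, 144, 173, 207, 249, 600]
print(slow_takeoff(gdp))  # → True: the 4-year doubling precedes the 1-year one
```

In this sketch a "fast takeoff" series, where the first doubling ever observed happens within a single year, would return False.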

7. Will AGI cause existential catastrophe conditional on there being a 4 year period of doubling of world GDP before a 1 year period of doubling?

  • Uses the same definition of existential catastrophe as previous questions.

8. Will AGI cause existential catastrophe conditional on there being a 1 year period of doubling of world GDP without there first being a 4 year period of doubling?

  • For example, we go from current growth rates to doubling within a year.

  • Uses the same definition of existential catastrophe as previous questions.

Timelines Questions

9. Will we get AGI from deep learning with small variations, without more insights on a similar level to deep learning?

  • An example would be something like GPT-N + RL + scaling.

10. Will we get AGI from 1-3 more insights on a similar level to deep learning?

  • Self-explanatory.

11. Will we need > 3 breakthroughs on a similar level to deep learning to get AGI?

  • Self-explanatory.

12. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling?

  • This includes: 1) We are unable to continue scaling, e.g. due to limitations on compute, dataset size, or model size, or 2) We can practically continue scaling but the increase in AI capabilities from scaling plateaus (see below).

13. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because we are unable to continue scaling?

  • Self-explanatory.

14. Before reaching AGI, will we hit a point where we can no longer improve AI capabilities by scaling because the increase in AI capabilities from scaling plateaus?

  • Self-explanatory.

Non-technical factor questions

15. Will we experience an existential catastrophe before we build AGI?

  • Existential catastrophe is defined here according to Toby Ord’s definition in the Precipice: “An event that causes extinction or the destruction of humanity’s long-term potential”.

  • This does not include events that would slow the progress of AGI development but are not existential catastrophes.

16. Will there be another AI Winter (a period commonly referred to as such) before we develop AGI?

  • From Wikipedia: “In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.”

  • This question asks whether people will commonly *refer* to a period as an AI winter, e.g. whether Wikipedia and similar sources will describe it as a third AI winter.

Acknowledgements

Big thanks to Ben Pace, Rohin Shah, Daniel Kokotajlo, Ethan Perez, and Andreas Stuhlmüller for providing really helpful feedback on this post, and suggesting many of the operationalizations.