[Draft for commenting] Near-term AI risk predictions

“Predictions of the Near-Term Global Catastrophic Risks of Artificial Intelligence”

Abstract: In this article, we explore the risks of the appearance of dangerous AI in the near term (0–5 years) and medium term (5–15 years). Polls show that around 10 percent of the probability weight is assigned to the early appearance of artificial general intelligence (AGI) within the next 15 years. Neural net performance and other characteristics, such as the number of “neurons”, have been doubling every year, and extrapolating this trend suggests that roughly human-level performance will be reached in 4–6 years, around 2022–24. Hardware performance is also accelerating, thanks to advances in graphics processing units and the use of many chips in a single processing unit, which have helped to overcome the limits of Moore’s law. Alternative extrapolations of technological development produce similar results. AI will become dangerous when it reaches the ability to solve the “computational complexity of omnicide” or to create self-improving AI. The appearance of near-human AI will strongly accelerate AI development, and as a result, some form of superintelligent AI may appear before 2030.
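
For readers who want to check the arithmetic behind the doubling extrapolation, here is a minimal sketch in Python. The one-year doubling time is taken from the abstract; the current-performance gaps of 1/16 to 1/64 of human level are illustrative assumptions chosen to reproduce the 4–6-year window, not figures from the draft.

```python
import math

def years_to_parity(current_fraction_of_human, doubling_time_years=1.0):
    """Years until performance reaches human level, assuming exponential
    growth with a fixed doubling time."""
    doublings_needed = math.log2(1.0 / current_fraction_of_human)
    return doublings_needed * doubling_time_years

# Hypothetical starting gaps: if today's systems sit at 1/16 to 1/64 of
# human level on the relevant metric, a one-year doubling time yields
# 4-6 years to parity, matching the 2022-24 window in the abstract.
for gap in (1 / 16, 1 / 32, 1 / 64):
    print(f"gap 1/{int(1 / gap)}: {years_to_parity(gap):.0f} years to human level")
```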

Highlights:
• The median of AI timing predictions is the wrong measure to use in AI risk assessment.
• The dangerous level of AI is defined by its ability to facilitate a global catastrophe, and it could be reached before AGI.
• The growth rate of hardware performance for AI applications has accelerated since 2016, and Moore’s law will provide enough computational power for AGI in the near term.
• The main measures of neural net performance have been doubling every year since 2012 and, if this trend continues, will reach human level around 2022.
• Several independent methods predict near-human-level AI after 2022 and a “singularity” around 2030.

Full text open for commenting here: https://goo.gl/6DyTJG