In his AI Insight Forum statement, Andrew Ng puts a 1% probability on "This rogue AI system gains the ability (perhaps access to nuclear weapons, or skill at manipulating people into using such weapons) to wipe out humanity" in the next 100 years, conditional on a rogue AI system existing that doesn't go unchecked by other AI systems. Overall, he puts a 1 in 10 million probability on AI causing human extinction in the next 100 years.
Thanks, added.