AI Risk, as Seen on Snapchat

The news tiles on Snapchat are utter drivel. Looking at my phone, today’s headlines are: “Zendaya Pregnancy Rumors!”, “Why Hollywood Won’t Cast Lucas Cruikshank”, and “The Most Dangerous Hood in Puerto Rico”. Essentially, Snapchat news is the Gen-Z equivalent of the tabloid rack at a Walmart checkout.

Which is why I was so surprised to see one of them lay out the arguments for AI risk.

The story isn’t exactly epistemically rigorous, but it reiterates points I’ve heard on this site, including recursive self-improvement, FOOMing, and the hidden complexity of wishes. Here’s an excerpt from the story, transcribed by me:

Given enough time and the ability to self-generate improved versions of itself, it wouldn’t take long for a fully autonomous general AI to achieve superintelligence, a level of cognitive processing power so many times stronger than our own that its abilities would appear godlike to us, and the pace at which it could improve would be lightning fast.

Superintelligent AI could think through problems in a few hours equivalent to what the world’s smartest people could do in a thousand years. How could we possibly control or out-think such a powerful intellect? [...]

Being the first superintelligence, the AI could see any competition as a threat, and could choose to remove [all humans]. [...]

When in the possession of such superintelligence, scientists may pose it questions, like how to solve previously-impossible mathematical equations. To solve the problem, the AI may choose to forcibly convert all matter on Earth into a supercomputer to handle the processing of the equation. It could solve the problem while killing the species that asked the question.

We could also make the mistake of giving the AI too vague a problem to solve, such as “end human suffering”. It could then decide the best way to do this is to eliminate all humans, and therefore end their suffering.

Why does this matter? The channel, Future Now, has 110,000 subscribers(!), and the story was likely seen by many people beyond its subscriber base. This is the first time I’ve seen anyone seriously attempt to explain AI risk to the masses.

The fact that Future Now ran the story implies that the masses are, in fact, capable of understanding the arguments for AI risk, so long as those arguments come from someone who sounds vaguely like an authority figure.