When AI’s smarter than we are, we have to be sure it shares our values: after all, we won’t understand how it thinks. (policymakers, tech executives)
Peter Berggren
Building advanced AI without knowing how to align it is like launching a rocket without knowing how gravity works. (tech executives, ML researchers)
(based on MIRI’s “The Rocket Alignment Problem”)
AI could be the best or worst thing to ever happen to humanity. It all depends on how we design and program it. (policymakers, tech executives)
One of my one-liners was written by an AI. You probably can’t tell which one. Imagine what AI could do once it’s even more powerful. (tech executives, ML researchers)
If we can’t keep advanced AI safe for us, it will keep itself safe from us… and we probably won’t be happy with the result. (policymakers)
People said man wouldn’t fly for a million years. Airplanes were fighting each other eleven years later. Superintelligent AI might happen faster than you think. (policymakers, tech executives) (source) (other source)
If the media reported on other dangers the way it reports on AI risk, it would cover them very differently. It would compare events in the Middle East to Tom Clancy novels. It would dismiss runaway climate change by saying it hasn’t happened yet. It would think of the risk of nuclear war in terms of putting people out of work. It would compare asteroid impacts to mudslides. It would call meteorologists “nerds” for talking about hurricanes.
AI risk is serious, and it isn’t taken seriously. It’s time to look past the sound bites and focus on what experts are really saying.
(policymakers, tech executives)
(based on Scott Alexander’s post on the subject)
Hoping you’ll run out of gas before you drive off a cliff is a losing strategy. Align AGI; don’t count on long timelines.
(ML researchers)
(adapted from Human Compatible)
Just because tech billionaires care about AI risk doesn’t mean you shouldn’t. Even if a fool says the sky is blue, it’s still blue.
(policymakers, maybe ML researchers)
Betting against the people who, six years ago, said pandemics were a big deal is a losing proposition.
(policymakers, tech executives)
(source)
You might think that AI risk is no big deal. But would you bet your life on it?
(policymakers)
There is a certain strain of thinker who insists on being more naturalist than Nature. They will say with great certainty that since Thor does not exist, Mr. Tesla must not exist either, and that the stories of Asclepius disprove Pasteur. This is quite backwards: it is reasonable to argue that a machine will never think because the Mechanical Turk couldn’t; it is madness to say it will never think because Frankenstein’s monster could. As well demand that we must deny Queen Victoria lest we accept Queen Mab, or doubt Jack London lest we admit Jack Frost. Nature has never been especially interested in looking naturalistic, and it ignores these people entirely and does exactly what it wants.
(adapted from Scott Alexander’s G. K. Chesterton on AI Risk)
(policymakers, tech executives)
“Follow the science” doesn’t just apply to pandemics. It’s time to listen to AI experts, not AI pundits.
(policymakers)
AI might be nowhere near human-level yet. We’re also nowhere near runaway climate change, but we still care about it.
(policymakers, tech executives)
Climate change was weird in the 1980s. Pandemics were weird in the 2010s. Every world problem is weird… until it happens.
(policymakers)
Clarke’s First Law goes: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Stuart Russell is only 60. But what he lacks in age, he makes up in distinction: he’s a computer science professor at Berkeley, neurosurgery professor at UCSF, DARPA advisor, and author of the leading textbook on AI. His book Human Compatible states that superintelligent AI is possible; Clarke would recommend we listen.
(tech executives, ML researchers)
(adapted slightly from the first paragraphs of the Slate Star Codex review of Human Compatible)
Social media AI has already messed up politics, and AI’s only getting more powerful from here on out. (policymakers)