The fact that Future Now ran the story implies that the masses are, in fact, capable of understanding the arguments for AI risk, so long as those arguments come from someone who sounds vaguely like an authority figure.
The masses are capable of understanding AI risk even without that. It’s really not hard to understand; the basic premise is the subject of dozens of movies and books made by people way dumber than Eliezer Yudkowsky. If you went to any Twitter thread after the LaMDA sentience story broke, you could see half the people in the comments half-joking that this is just like The Terminator and they need to shut it off right now.
Maybe they’re using the “wrong” arguments, and they certainly don’t have a model of the problem detailed enough to really deal with it, but a sizable number of people have at least some kind of model (maybe a xenophobia heuristic?) that lets them come to the right conclusion anyway. They just never think to do anything about it because 99% of the public believes artificial intelligence is 80 years away.
Totally agree; but the very basics (if we make something smarter than us and don’t give it the right objective, it’ll kill us) are graspable by what seems like a large fraction of the general population. I’m not saying they wouldn’t change their minds if their favorite politician put out a “debunking video”, but the seeds are at least there.
I was surprised by this tweet, so I looked it up. Reading a bit further, I ran into this; I guess I’m kind of surprised to see a concern as fundamental as alignment, whether or not you agree it’s a major issue, be so… is polarizing the right word? Is this an issue we can expect to grow as AI safety (hopefully) becomes more mainstream? “LW extended cinematic universe” culture getting an increasingly bad reputation seems like it would be devastating for alignment goals in general.
Reputation is a vector, not a scalar. A certain subsection of the internet produces snarky drivel. This includes creationists producing snarky drivel against evolution, and probably some evolutionists producing snarky drivel against creationists.
Why are they producing snarky drivel about AI now? Because the ideas have finally trickled down to them.
Meanwhile, the more rational people ignore the snarky drivel.
I’ve seen a lot of counter-sentiment to the idea of AI safety, though:
(I have a collection of ~20 of these, which I’ll probably make into a top-level post.)
Cool, but the less rational people’s opinions are influential, so it’s important to mitigate their effect.