I’m going to quote from an EA Forum post I just made on why simple repeated exposure to AI Safety (through e.g. media coverage) will probably do a lot to persuade people:
[T]he more people hear about AI Safety, the more seriously people will take the issue. This seems to be true even if the coverage is purporting to debunk the issue (which as I will discuss later I think will be fairly rare) - a phenomenon called the illusory truth effect. I also think this effect will be especially strong for AI Safety. Right now, in EA-adjacent circles, the argument over AI Safety is mostly a war of vibes. There is very little object-level discussion—it’s all just “these people are relying way too much on their obsession with tech/rationality” or “oh my god these really smart people think the world could end within my lifetime”. The way we (AI Safety) win this war of vibes, which will hopefully bleed out beyond the EA-adjacent sphere, is just by giving people more exposure to our side.
This will definitely help. But any kind of dirty tricks could easily deepen the polarization with those opposed. Thinking about it more, I believe this polarization is already in play. Interested intellectuals have already seen years of forceful AI doom arguments, and many dislike the whole concept on an emotional level. Those dismissals, in turn, drive AGI x-risk believers (including myself) kind of nuts, so we respond even more forcefully, and the cycle continues.
The problem with this is that, if the public perceives AGI as dangerous but most of those actually working in the field do not, policy will tend to follow the experts and ignore the populace. Policymakers will put in surface-level rules that sound like they’ll monitor AGI work without actually doing much. At least, that’s my read on much of the public policy that responds to public outcry.