I think a good counter to this from the activism perspective is avoiding labels and producing objective, thoughtful, and well-reasoned content arguing your point. Anti-AI-safety content often focuses on attacking the people, or the specific beliefs of the people, in the AI safety/rationalist community. The epistemic effects of these attacks can be circumvented by avoiding association with that community as much as is reasonable, without being deceptive. A good example would be the YouTube channel AI in Context, run by 80,000 Hours. They made an excellent AI 2027 video, coming at it from an objective perspective and effectively connecting the dots from the seemingly fantastical scenario to reality. That video is now approaching 10 million views on a completely fresh channel! See also SciShow’s recent episode on AI, which also garnered an extremely positive reception.
The strong viewership on this type of content demonstrates that people are clearly receptive to the AI safety narrative if it’s presented tastefully and logically. Most of the negative comments on these videos (anecdotally) come from people who believe that superintelligent AI is either impossible or extremely distant, not from people who reject the premise altogether. In my view, content like this would be only weakly affected by the type of attacks you are talking about in this post. To be blunt, to oversimplify, and to risk being overconfident, I believe safety and caution narratives have the advantage over acceleration narratives by merit of being based in reality and logic! Imagine attempting to make a “counter” to the above videos, trying to make the case that safety is no big deal. How would you even go about that? Would people believe you? Arguments are not won by truth alone, but it certainly helps.
The potential political impact seems more salient, but in my (extremely inexpert) opinion, getting the public on your side will cause political figures to follow. The measures required to meaningfully impact AI outcomes demand so much political will that extremely strong public opinion is required, and that opinion comes from a combination of real-world impact and evidence (“AI took my job”) along with properly communicating the potential future and dangers (like the content above). The more the public is on the side of an AI slowdown, the less impact a super PAC can have on politicians’ decisions regarding the topic (compare a world where 2 percent of voters say they support a pause on AI development to one where 70 percent say they do: in the first, a politician would be easily swayed to avoid the issue by the threat of adversarial spending, but in the second, the political risk of avoiding the issue is far greater than the risk of invoking the wrath of the super PAC). This is not meant to diminish the very real harm that organized opposition can cause politically, or to downplay the importance of countering that political maneuvering in turn. Political work is extremely important, especially if well-funded groups are working to push the exact opposite of the narrative that is needed.
I don’t mean to diminish the potential harm this kind of political maneuvering can have, but in my view the future is bright from the safety activism perspective. I’ll also add that I don’t believe my view of “avoid labels” and your point about “standing proud and putting up a fight” are opposed. Both can happen in parallel, two fights at once. I strongly agree that backing down from your views or actions as a result of bad press is a mistake, and I don’t advocate for that here.