The kind of movement that would be most effective at stopping AI would be one that is anti-AI with no nuances and certainly no transhumanism. Just emphasize every downside of AI, actual and possible, with human extinction front and center as the really big reason not to do it.
Yeah. As an example of “no nuances”, maybe an effective anti-AI movement would even have to be anti-AI-alignment. As in, it would tell young people “don’t work on AI alignment”.
One movement is probably the wrong idea; rather, we need different movements tailored to each relevant social system and class, designed so that they work well with each other.
Many people dislike AI for mundane reasons, and efforts to address the “if anyone builds it, everyone dies” (IABIED) issue often seem to get watered down into a focus on mundane (but still important) issues in ways that do not address IABIED.
A wedge issue: narrow AI. I think narrow AI is very good and useful, and we should redirect investment from AGI/ASI towards interdisciplinary narrow AI. This perspective is much more appealing to many pro-technology people than blanket opposition to AI, and pro-technology people are an important group to convince. But, for example, art generation is (mostly) a form of narrow AI, and it has (much like LLMs) been trained illegally on stolen intellectual property. I think that is a problem, but I do not oppose, in general, the kind of machines that would put artists out of work. So an anti-AGI, pro-narrow-AI stance is unlikely to be popular with artists.
Unfortunately, I believe nuance is necessary, but the idea of having multiple movements focused on multiple issues seems worthwhile for mitigating some of the problems that nuance creates.