The phone seems to be off the hook for most of the public on AI danger, perhaps a symptom of burnout from numerous other millenarian scientific scares. People have been hearing of imminent catastrophes for decades that have failed to impact the lives of 95%+ of the population in any significant way, and they now write it all off as more of the same.
I am sure that most LW readers find little in the way of positive reception for our concerns among less technologically engaged family members and acquaintances. We are still competing with too many comforting techno-utopian narratives, informed by superficially positive representations in movies and TV, and most people bias toward optimism in relatively good, comfortable times like these. We are dealing with emotional reactions and the sheeplike 'vibe' of the population rather than thoughtfully considered positions.
But I am pretty sure that will all change as soon as we see significant AI competition for, and undercutting of, white-collar professions. Those affected will quickly start baying for blood, and the electorally dominant empathic response to the sad stories of the economically impacted will rapidly swing the more emotively governed wider population against AI. OECD democratic governments will then inevitably move to ban AI from taking the jobs of politically protected classes of people (some niches might still be left vulnerable: medicine, where a chronic shortage of service supply keeps prices ridiculously high, and perhaps tech, for vengeful reasons). It will be a Butlerian Jihad lite, aimed at symptoms rather than causes, and it will likely buy us a few years of relative normalcy while more dangerous ASI is developed in government-approved labs and by despotic regimes.
I doubt it will save us on a 50-year time frame, but it will perhaps make the economic disruption less severe for 5-10 years.
The way to have a bigger impact in the shorter term would be to buy AI-danger editorial support from influencers, particularly those with young female audiences, who form the core of environmental and other popular protest movements. They are by temperament the easiest to bring on board, and they have outsized political influence.
I think the belief that "AI is bad" is widespread, but people don't have a clear goal to rally behind. People want to support something, but give a resigned "what am I to do?"
If there's a strong cause with a clear chance of helping (e.g., a "don't build AI or advance computer semiconductors for the next 50 years" guild), people will rally behind it.