Being able to take future AI seriously as a risk seems to be highly correlated with being able to take COVID seriously as a risk in February 2020.
The key skill here may be as simple as being able to selectively turn off normalcy bias in the face of highly unusual news.
A closely related “skill” may be a certain general pessimism about future events, the sort of thing economists jokingly describe as “correctly predicting 12 out of the last 6 recessions.”
That said, mass public action can be valuable. It’s a notoriously blunt tool, though. As one person put it, “if you want to coordinate more than 5,000 people, your message can be about 5 words long.” And the public will act anyway, in some direction. So if there’s something you want the public to do, it can be worth organizing and working on communication strategies.
My political platform is, if you boil it down far enough, about 3 words long: “Don’t build SkyNet.” (As early as the late 90s, I joked about having a personal 11th commandment: “Thou shalt not build SkyNet.” One of my career options at that point was to potentially work on early semi-autonomous robotic weapon platform prototypes, so this was actually relevant moral advice.)
But I strongly suspect that once the public believes that companies might truly build SkyNet, their reaction will be “What the actual fuck?” and widespread public backlash. I expect lesser but still serious public backlash if AI agents ever advance beyond the current “clever white-collar intern” level of competence and start taking over jobs en masse.
The main limits of public action are that (1) public action is a blunt tool, and (2) the public needs to actually believe in an imminent risk. Right now AI risk mostly gets filed under “I hate AI slop” and “it’s a fun hypothetical bull session, with little impact on my life.” Once people actually start to take AI seriously, you will often see strong negative attitudes even from non-technical people.
Of course, public majorities of 60–80% of the population want lots of things that the US political system doesn’t give them. So organizing the public isn’t sufficient by itself, especially if your timelines are short. But if you assume a significant chance that timelines are closer to (say) 2035 than 2027, then some kinds of public outreach might be valuable, especially if the public starts to believe. This can create significant pressure on legislative coalitions and executive leadership. But it’s all pretty hit-or-miss, and luck would play a major role.