Yesterday Monmouth released a high-quality survey of American public opinion on AI, Artificial Intelligence Use Prompts Concerns. Some notable results:
9% say AI would do more good than harm vs 41% more harm than good (similar to responses to a comparable survey in 2015)
55% say AI could eventually pose an existential threat (up from 44% in 2015)
55% favor “having a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices”
60% say they have “heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans”
Worries about safety and support for regulation echo other surveys:
71% of Americans agree that there should be national regulations on AI (Morning Consult 2017)
The public is concerned about some AI policy issues, especially privacy, surveillance, and cyberattacks (GovAI 2019)
The public is concerned about various negative consequences of AI, including loss of privacy, misuse, and loss of jobs (Stevens / Morning Consult 2021)
Surveys match the anecdotal evidence from talking to Uber drivers: Americans are worried about AI safety and would support regulation of AI. Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI; perhaps better public opinion would enable better policy responses to AI, or better actions from AI labs or researchers.
Public desire for safety and regulation is far from sufficient for a good government response to AI. But it does mean that the main challenge for improving the government response is helping relevant actors believe what’s true, developing good affordances for them, and helping them take good actions, not making people care enough about AI to act at all.