A lot of people are worried about AI. What are their worries? How worried are they? Are some demographics more worried than others? We ran a study to find out.
In this article, we explain 16 concerns about AI that you might find it valuable to know about. We discuss, based on our data (collected in October 2025), how worried people in the US are about each concern.
To whet your appetite, here are some questions that our study offers insights into. Can you predict what we found before we tell you the answers?
Are conservatives more, less, or equally concerned about AI compared to progressives?
What about gender—are men or women more likely to be concerned?
Does AI-related knowledge affect how concerned people are?
What are people most concerned about when it comes to AI?
How low or high is the general level of concern about AI in the US population?
Have you made your predictions? Okay, let’s get into the study.
How we studied AI concern
We started by scouring the internet for expressions of concern about AI and compiling a list of common concerns, based on what we found (as well as our own background experience of hearing people express concerns). The potential concerns about AI that we identified are:
Proliferation of low-quality AI content (i.e., ‘AI slop’)
AIs plagiarising the work of humans (e.g., remixing the work of artists without compensation)
AI elimination of jobs
AI misinformation (including deepfakes)
People using AI but pretending they haven’t (e.g., to write school assignments)
AI used for authoritarian control (e.g., for monitoring and punishing populations based on behavior)
Relationships (often romantic) people have with AIs
Inequality caused by AI (such as by creating concentration of wealth)
AI ideological bias (e.g., favoritism toward progressive or conservative viewpoints)
AI bias and discrimination (e.g., by perpetuating unfair unequal treatment of different groups)
Concentration of power caused by AI (e.g., making those who control the most advanced AIs much more powerful than everyone else)
AI used for scams or to manipulate individuals (e.g., AI bots designed to seem like specific humans in order to trick people)
Ceding of more and more control to AIs (e.g., making major decisions impacting millions of people that humans no longer make)
Slaughterbots (i.e., weaponized AI drones)
Superintelligence (i.e., AI that outperforms the ability of humans in essentially all domains)
AI itself experiencing suffering when we train or run it.
Each of these concerns is described and explored in more detail below.
While we were conducting this experiment (in October 2025), some other concerns became more prevalent in discourse about AI; the most notable of these that weren’t included in our study are:
The possibility that the valuations of companies related to generative AI represent a ‘bubble’ that, upon bursting, will have disastrous consequences for the US or world economy
The negative impacts of AI data centers on local communities (e.g., pollution, use of ground water)
The environmental impacts of AI, via the energy or water consumption of data centers
Children having increased access to inappropriate content
We recruited 403 participants through our participant recruitment platform, Positly.com, and started by asking them some general questions about their level of knowledge on the topic of AI and their overall concerns about its impact on their lives and society. After that, we showed them information about the 16 potential AI-related concerns we identified (one potential concern at a time, in a random order). For this, we assigned each participant randomly to one of two groups:
Short Definitions: 200 participants were shown just a short sentence defining each of the 16 concerns
Full Descriptions: 203 participants were shown the same short sentence definitions as the Short Definitions group and a longer description of each concern, containing examples. (We’ve included all of the full descriptions in this article, below.)
For each potential concern, participants were asked to indicate their level of actual concern about it on a 5-point Likert scale from “Not at all concerned” (which was assigned the value 0) to “Extremely concerned” (which was assigned the value 4).
Finally, at the end of the study, participants were asked again about their general levels of concern about AI (in their own lives and for society), to see whether participating in the study and seeing information about so many potential concerns changed their level of concern, and then they were asked some demographic questions.
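As a minimal sketch of how responses on this kind of 0–4 Likert scale can be turned into the per-concern mean scores discussed below (the intermediate scale labels and the data here are illustrative assumptions, not the study’s actual instrument or dataset):

```python
# Map Likert labels to scores. The endpoints ("Not at all concerned" = 0,
# "Extremely concerned" = 4) are from the study; the intermediate labels
# are assumed for illustration.
LIKERT = {
    "Not at all concerned": 0,
    "Slightly concerned": 1,
    "Moderately concerned": 2,
    "Very concerned": 3,
    "Extremely concerned": 4,
}

def mean_concern(responses):
    """Mean concern score (0-4) for one item, given a list of label responses."""
    scores = [LIKERT[r] for r in responses]
    return sum(scores) / len(scores)

# Toy responses for a single concern (illustrative only).
responses = [
    "Extremely concerned",
    "Very concerned",
    "Moderately concerned",
    "Very concerned",
]
print(mean_concern(responses))  # 3.0
```

Averaging across all participants for each of the 16 items yields the per-concern scores (e.g., 2.74, 2.8, 3.05) that the discussion below refers to.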
Now, let’s dive into the results! We’ll start with results about overall concern (before diving into the 16 specific concerns).
Since this is a long report, we’ve included just the initial section here. To read the full report, go here: https://www.clearerthinking.org/post/study-report-what-concerns-people-about-ai
This image from the article is what I was most interested in:
Thanks for posting the link here, and doing the study in the first place!
I’m interested in the way that some concerns were very broadly shared, and others were up or down weighted by personal factors. I wanted texture and conjecture… so I added some!
The above were (1) the highest concerns in general, (2) not detectably linked to any personal factors, and (3) basically ORDERED: no adjacent pair differed to a statistically detectable degree, but any jump of two steps did produce a separation. Power Concentration was 2.74, Jobs 2.8, Surveillance 3.05, Scammers 3.11, and Deepfakes 3.2.
Something that’s interesting to me is that I personally would INVERT these in importance?
The scammers and deepfakes seem like minor issues that happen to lots of people, but have small impacts, at least to me.
Whereas Power Concentration and Mass Unemployment seem very bad to me, based on the likely near-impossibility of preventing them and their very broad and deep impacts.
Next:
These were lower yet still broadly shared concerns, and they were packed more tightly. Only the problem of “slop” stood out as less important (2.24); the rest had concern levels between 2.5 and 2.7 and might be re-ordered with a larger sample.
Skipping around a bit...
These had negative correlations with conservatism!
G11’s correlations were stronger (i.e., “more negative”), and also held for “social conservatism” and “financial conservatism” independently.
I picked them out for mention first because these were the only negative correlations. If anything else had had negative correlations, I would have put that early too.
If you subtracted non-conservatives (as in a political primary?), these would get LESS attention (though only a bit; it’s not a strong correlation), and if you subtracted conservatives from the poll, they might rank a tiny bit higher.
My gut says that this tracks. Leftists are more into these two issues, in my experience, and proudly so, and prominent ideologues on the left, opining on AI stuff, seem often to mention these a lot and even downplay anything else.
These had positive correlations with “Being A Woman”. In a group of pure men they would get less concern, and in a purely female group they might go higher.
This doesn’t hang together “as a story” for me, except that women tend to be more on the left, and more in churches?
Which we will presently cover...
These were low overall, but positively correlated with religiosity (but NOT conservatism).
Compare the law Ohio legislators floated to pre-emptively ban human/AI marriages (among other things).
It’s like “seeing the obvious moral patiency of a sapient being that isn’t human” and “wanting to create social barriers anyway” go together, maybe?!?! Fascinating.
This is the last grouping. Every single one of them EXCEPT “Superintelligence” has shown up several times already in my listing! These all load positively on “Spirituality”. (S1 loads the most (0.25 correlation) and S4 the least (0.16 correlation).)
Slaughterbots also loads on “Being a woman”.
Relationships and Suffering also load on “religiosity”.
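The loadings quoted above (0.25, 0.16) read like Pearson correlation coefficients between a trait score and a concern rating, though the report doesn’t specify the exact statistic. As a sketch under that assumption, with purely made-up toy data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: a spirituality score vs. a concern rating per participant
# (illustrative only, not the study's data).
spirituality = [1, 2, 3, 4, 5]
concern = [2, 2, 3, 3, 4]
print(round(pearson(spirituality, concern), 2))  # 0.94
```

A positive value near 1 means the two measures rise together; the 0.16–0.25 figures above would be weak-to-modest versions of the same pattern.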
I think there’s an almost half-logical, vibes-based story that could be pulled out of this data (a story I can spin but don’t endorse, though it would be fun to try to test!): imagine a church in the US (which is still primarily a Christian country). Most churches are full of older women.
Those women, at church, are thinking about God (a theological echo of superintelligence?) but also feel sad about the suffering of Jesus (a theological echo of AI suffering during training?) and the Hand Of God reaching out to delete bad people from existence (Slaughterbots). Then of course they are church ladies, so they think about who should be romantic with whom (and they don’t want humans and angels mixing) ;-)