What Washington Says About AGI
I spent a few hundred dollars on Anthropic API credits and let Claude individually research every current US congressperson’s position on AI. This is a summary of my findings.
Disclaimer: Summarizing people’s beliefs is hard and inherently subjective and noisy. Likewise, US politicians change their opinions on things constantly so it’s hard to know what’s up-to-date. Also, I vibe-coded a lot of this.
Methodology
I used Claude Sonnet 4.5 with web search to research every congressperson’s public statements on AI, then used GPT-4o to score each politician on how “AGI-pilled” they are, how concerned they are about existential risk, and how focused they are on US-China AI competition. I plotted these scores against GovTrack ideology data to search for any partisan splits.
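For concreteness, here is a minimal sketch of what the research phase could look like using the Anthropic API's web search tool. The prompt wording, model alias, and helper function are illustrative, not the exact ones used for this post:

```python
from anthropic import Anthropic

client = Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

def research_member(name: str) -> str:
    """Ask Claude, with web search enabled, to collect one member's
    public statements on AI. Prompt wording is illustrative only."""
    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=4096,
        tools=[{"type": "web_search_20250305", "name": "web_search"}],
        messages=[{
            "role": "user",
            "content": (
                f"Research {name}, a current member of the US Congress. "
                "Find direct quotes from their public statements about AI, "
                "AGI, existential risk, and US-China AI competition. "
                "Cite the source of each quote and end with a short summary."
            ),
        }],
    )
    # Keep only the text blocks; tool-use and search-result blocks are skipped.
    return "".join(block.text for block in response.content if block.type == "text")
```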
I. AGI awareness is not partisan and not widespread
Few members of Congress have public statements taking AGI seriously, and among those who do, political ideology does not explain the difference. If we simply plot the AGI-pilled score against the ideology score, we observe no obvious partisan split.
There are 151 congresspeople for whom Claude could not find substantial quotes about AI. These members are excluded from this plot and all plots that follow.
II. Existential risk is partisan at the tails
When you change the scoring prompt to ask how much a congressperson’s statements reflect a concern about existential risk, the plot looks different. Note that the scoring prompt here emphasizes “A politician who is most XRisk-pilled is someone who thinks AI is a risk to humanity—not just the US.” This separates x-risk concerns from fears related to US-China relations.
This graph looks mostly like noise, but it does show that the majority of the most x-risk-pilled politicians are Democrats.[1] This is troubling. Politics is the mind-killer, and if AI Safety becomes partisan, productive debate will be even more difficult than it already is.
III. Both parties are fixated on China
Some congresspeople have made up their minds: the US must “win” the race against China and nothing else matters. Others hold more nuanced opinions. But most are thinking about US-China relations when they speak about AI. Notably, the most conservative congresspeople are more likely than the most progressive members to be exclusively focused on US-China relations.
This plot has a strange distribution. For reference, the scoring prompt uses the following scale:
0 = Does not mention China in their views on AI, or does not think US-China relations are relevant
50 = Cites US-China relations when talking about AI, but it is not the only factor motivating their position on AI
100 = Cites US-China relations as the only factor motivating their position on AI, and mentions an AI race against China as a serious concern
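As a rough illustration, a scoring prompt built on this kind of anchored scale might look like the sketch below. The exact prompt used for the post differs; the rubric text and function here are for illustration only:

```python
# The rubric mirrors the scale above; the surrounding prompt is a sketch.
CHINA_FOCUS_RUBRIC = """\
Score this politician's statements on AI from 0 to 100.
0   = Does not mention China in their views on AI, or does not think
      US-China relations are relevant.
50  = Cites US-China relations when talking about AI, but it is not the
      only factor motivating their position on AI.
100 = Cites US-China relations as the only factor motivating their
      position on AI, and mentions an AI race against China as a
      serious concern.
Respond with a single number and nothing else."""

def build_scoring_prompt(quotes: str) -> str:
    """Combine the anchored rubric with one member's extracted quotes."""
    return f"{CHINA_FOCUS_RUBRIC}\n\nStatements:\n{quotes}\n\nScore:"
```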
IV. Who in Congress is feeling the AGI?
I found 21 members of Congress who are “AGI-pilled”:
Bernie Sanders (Independent Senator, Vermont): AGI-pilled and safety-pilled
Richard Blumenthal (Democratic Senator, Connecticut): AGI-pilled and safety-pilled
Rick Crawford (Republican Representative, Arkansas): AGI-pilled but doesn’t discuss x-risk (only concerned about losing an AI race to China)
Bill Foster (Democratic Representative, Illinois): AGI-pilled and safety-pilled
Brett Guthrie (Republican Representative, Kentucky): AGI-pilled but doesn’t discuss x-risk (only concerned about losing an AI race to China)
Chris Murphy (Democratic Senator, Connecticut): AGI-pilled and somewhat safety-pilled (more focused on job loss and spiritual impacts)
Brad Sherman (Democratic Representative, California): AGI-pilled and safety-pilled
Debbie Wasserman Schultz (Democratic Representative, Florida): AGI-pilled and safety-pilled
Bruce Westerman (Republican Representative, Arkansas): AGI-pilled but not necessarily safety-pilled (mostly focused on winning the “AI race”)
Ted Lieu (Democratic Representative, California): AGI-pilled and safety-pilled
Donald S. Beyer (Democratic Representative, Virginia): AGI-pilled and (mostly) safety-pilled
Mike Rounds (Republican Senator, South Dakota): AGI-pilled and somewhat safety-pilled (talks about dual-use risks)
Raja Krishnamoorthi (Democratic Representative, Illinois): AGI-pilled and safety-pilled
Elissa Slotkin (Democratic Senator, Michigan): AGI-pilled but not safety-pilled (mostly concerned about losing an AI race to China)
Dan Crenshaw (Republican Representative, Texas): AGI-pilled and maybe safety-pilled
Josh Hawley (Republican Senator, Missouri): AGI-pilled and safety-pilled
“Americanism and the transhumanist revolution cannot coexist.”
Nancy Mace (Republican Representative, South Carolina): AGI-pilled but not safety-pilled (only concerned about losing an AI race to China)
“And if we fall behind China in the AI race...all other risks will seem tame by comparison.”
Jill Tokuda (Democratic Representative, Hawaii): AGI-pilled and safety-pilled but this is based on very limited public statements
Eric Burlison (Republican Representative, Missouri): AGI-pilled but not safety-pilled (only concerned about losing an AI race to China)
Nathaniel Moran (Republican Representative, Texas): AGI-pilled and safety-pilled (but still very focused on US-China relations)
Pete Ricketts (Republican Senator, Nebraska): AGI-pilled but not safety-pilled (only concerned about losing an AI race to China)
V. Those who know the technology fear it.
Of the members of Congress who are strongest on AI safety, three have some kind of technical background.
Bill Foster is a US Congressman from Illinois; in the 1990s, he was one of the first scientists to apply neural networks to the study of particle physics interactions. From reading his public statements, I believe he has the strongest understanding of AI safety of any member of Congress. For example, Foster has referenced exponential growth in AI capabilities:
As a PhD physicist and chip designer who first programmed neural networks at Fermi National Accelerator Laboratory in the 1990s, I’ve been tracking the exponential growth of AI capabilities for decades, and I’m pleased Congress is beginning to take action on this issue.
Likewise, Ted Lieu has a computer science degree from Stanford. In July 2025, he stated, “We are now entering the era of AI agents,” which is a sentence I cannot imagine most members of Congress saying. He has also acknowledged that AI could “destroy the world, literally.”
Despite being 75 years old, Congressman Don Beyer is enrolled in a master’s program in machine learning at George Mason University. Unlike most other members of Congress, Beyer demonstrates in his statements an ability to think critically about AI risk:
Many in the industry say, Blah. That’s not real. We’re very far from artificial general intelligence … Or we can always unplug it. But I don’t want to be calmed down by people who don’t take the risk seriously
Appendix: How to use this data
The extracted quotes and analysis by Claude for every member of Congress can be found in a single JSON file here.
I found Claude’s “notes” in the JSON to be extremely comprehensive and accurate summaries of each congressperson’s position on AI. The direct quotes in the JSON are also very interesting to look at. I have cross-referenced many of them and hallucinations are very limited[2] (Claude had web search enabled, so it was able to take quotes directly from websites, though in at least one case it made a minor mistake). I have also spot-checked some of the scores GPT-4o produced and they are reasonable, but as is always the case with LLM judges, the values are noisy.
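If you want to browse the file programmatically, something like the following works, assuming the JSON is a list of per-member records. The filename and field names here are hypothetical; check the released file for the actual schema:

```python
import json

# Filename and field names ("name", "notes", "quotes") are hypothetical;
# inspect the released file for the actual schema.
with open("congress_ai_positions.json") as f:
    members = json.load(f)

for member in members:
    print(member["name"])
    print("  notes:", member["notes"][:200], "...")
    for quote in member["quotes"][:3]:
        print("  quote:", quote)
```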
I have released all the code for generating this data and these plots, but it’s pretty disorganized and I expect it to be difficult to use. If you send me a DM, I’d be happy to explain anything. Running the full pipeline costs roughly $300, so be aware of this if you would like to run a modified version.
- ^
It also looks like more moderate politicians may be less x-risk-pilled compared to those on each extreme. But the sample here is small and “the graph kind of looks like a U if you squint at it” doesn’t exactly qualify as rigorous analysis.
- ^
I obviously cross-referenced each of the quotes in this post.
How did you calibrate the AGI-pilled scoring? Range-based evals like this (e.g. “score this example from 0 to 1 on these metrics”) have historically been hard to get AIs to do well, though I’ve not tried in the last couple months.
Yes, this kind of eval is noisy, but there is much more signal than noise. The script for the scoring is here and the scoring prompt is below. One thing I do, which other papers have also done to get better results, is to aggregate the token probabilities over the score the model produces (e.g., if there is some probability the model outputs the “90” token and some probability it outputs the “30” token, the script averages these weighted by probability instead of just sampling one).
My understanding is that using an LLM as a judge in this way is still not ideal, and finding a better way to do this is an open research question.
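For reference, here is a minimal sketch of that probability-weighted aggregation using the logprobs exposed by the OpenAI chat completions API. This is my own illustration of the technique, not the exact script linked above, and it assumes the score is emitted as a single numeric token:

```python
import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def expected_score(prompt: str, model: str = "gpt-4o") -> float:
    """Probability-weighted score: average over the numeric tokens the
    model might emit, weighted by their probabilities, instead of
    sampling a single value. Assumes the score fits in one token."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,        # read the score from the first output token
        logprobs=True,
        top_logprobs=20,     # inspect the 20 most likely first tokens
    )
    top = response.choices[0].logprobs.content[0].top_logprobs
    weighted_sum, total_prob = 0.0, 0.0
    for entry in top:
        try:
            value = float(entry.token)  # keep numeric tokens ("90", "30", ...)
        except ValueError:
            continue                    # skip non-numeric tokens
        prob = math.exp(entry.logprob)
        weighted_sum += value * prob
        total_prob += prob
    return weighted_sum / total_prob if total_prob > 0 else float("nan")
```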
This is fantastic. It has me wondering what other cheap, highly effective things we can set modern AI to for AI safety.
Thoughts I had about this specifically:
Re: Sonnet for search and 4o for analysis, could Opus or GPT 5.2 have been cheaper or better? My impression is that Opus 4.5, despite higher token costs, uses them more efficiently.
Would it be cheaper to use Gemini/Perplexity (which, as I understand it, tend to be more efficient and powerful when searching than Claude)?
Would using Grok to pull Twitter data have been helpful?
Would a verification pass using different instances reduce hallucinations, find missed data points, and improve the code?
I’m not very experienced in doing this—these are just the first thoughts that came to mind. I’m half tempted to try to replicate your results!
I’m excited you found this interesting! Thoughts:
Opus, GPT 5.2, Gemini, Perplexity, Grok (for Twitter data), or something else could be more accurate and cheaper. I spent very little time trying to figure out the ideal setup for the research phase. If anyone has thoughts on this, I would be interested.
Re “what other cheap, highly effective things we can set modern AI to for AI safety”:
The best thing I can think of is to research every politician that is running for any US office and gauge their position on AI from deep research. Then flag the best campaign to work on in every state.
Likewise, scrape LinkedIn, Twitter, and other social media for people working at frontier labs. What percent of people at each lab have explicitly condemned alignment efforts? What percent at each lab endorse them?
If anyone else has ideas, let me know!
I agree that having a verification pass would be good.
Re: “I’m half tempted to try to replicate your results!”
You should do this! One issue with the approach in this post is that the scoring functions are pretty noisy. Even rerunning just the evaluation phase (not the research phase) with a more detailed and specific evaluation strategy may give much more useful results than this post.
In general, writing a meta post on the cheapest and most accurate way to do these kinds of deep research dives seems very good! I don’t know how wide the audience is for this, but for what it is worth, I would read this.
(Note that this post wasn’t front-paged, so if you want to reach a wide audience on LessWrong in follow-up work, I would reach out to mods to get a sense of what is acceptable and lean away from doing more political posts.)
Thank you very much for doing this research, I’ve needed this data and I had been struggling to find the time to do something similar.
One warning that I would give to anyone looking at these results is to remember that this represents what congress members have said publicly, which is distinct from what they believe. As someone who recently started working in policy/politics, I’ve learned that it is critical to compare what politicians say with how they vote. Some of the politicians with the most articulate understanding of an issue are also the most captured by special interests. This doesn’t necessarily mean they are lying (although the level of outright lies is non-negligible). It is perfectly possible to say “I am concerned by X” and then vote against every bill addressing X. Legislation is complex and there are always imperfections that create excuses for voting against any particular bill.
I think this visual effect could plausibly be explained by polarization, without there being any real correlation between extremeness and concern about AI x-risk. Most politicians aren’t moderate, and most politicians aren’t concerned about AI x-risk. So the distribution of ideology scores of politicians at the bottom (not concerned about AI x-risk) is bimodal, and the distribution of ideology scores of politicians near the top (very concerned about AI x-risk) is bimodal, but the whole distribution is thicker at the bottom than near the top. The density of non-x-risk-concerned moderates could be high enough to nearly saturate our ability to perceive the density of dots in this graphic, so that the actually much denser regions to the left and right aren’t readily apparent as denser. But higher up, the dots aren’t dense enough to saturate our ability to perceive their density, so it is visually obvious that there are more at the extremes than in the middle.
Yes, this seems reasonable! There are other ways this trend could be fake, which is why I said:
In general, I suspect that the U is real but this is really just a personal opinion and there isn’t strong evidence to demonstrate this.