The Center for AI Policy Has Shut Down

And the need for more AIS advocacy work

Executive Summary

The Center for AI Policy (CAIP) is no more. CAIP was an advocacy organization that worked to raise policymakers’ awareness of the catastrophic risks from AI and to promote ambitious legislative solutions. Such advocacy is necessary because good governance ideas don’t spread on their own, and to meaningfully reduce AI risk, they must reach the U.S. federal government.

Why did CAIP shut down? The reasons are mixed. Some were internal, such as hiring missteps. But others reflect the broader ecosystem: funders setting the bar for advocacy projects unreasonably high, and structural biases in the funding space that privilege research over advocacy. While CAIP’s mistakes played a role, a full account also needs to reckon with these systemic factors.

I focus on CAIP, as I think it filled a particular niche and was impactful, but there are many other advocacy orgs doing great work (see A5), and the core argument is that we need more of that work. Looking forward, impactful advocacy projects will likely continue to compete for a far more limited pool of funds than research efforts. That makes individual support a particularly high-leverage opportunity, and for those concerned with AI risk, donating to AI safety (AIS) advocacy is worth seriously considering. The space would also greatly benefit from a CAIP 2.0 (an AIS advocacy organization willing to speak frankly about catastrophic risks) as well as an organization focused on developing advocacy talent.

Some brief notes:

  • For those not as interested in the CAIP bit, feel free to jump to the “Funders Have Set the Bar Too High for Advocacy” section and read from there.

  • Our executive director Jason has already written extensively about much of this in his sequence, which I aim to partially summarize here as I also make my own case for the need for advocacy. My opinions are shared in a personal capacity.

  • My deepest gratitude to all of those who spent time reviewing and chatting through the implications of this piece; it’s truly much better for it.

Why Advocacy?

Before describing CAIP’s work, I want to briefly lay out the basic case for AIS advocacy (see A1[1] for how I’m defining “advocacy”). This is partly for readers unfamiliar with the space, and partly to ground disagreements in a clearer argument for why advocacy matters.

Why AI?

The continued development of AI could pose serious threats to humanity, potentially even existential risks. In response, there are two broad strategies: technical solutions, which aim to make AI models themselves safer, and governance solutions, which aim to shape the behavior of the companies developing those models. Doing work on both seems important.

Why Congress?

Governance efforts can focus on many different actors: state or federal legislatures, international bodies, standards-setting organizations, or benchmarking groups. Ultimately, though, the goal is always to influence the companies building frontier AI. While there is value in working across multiple levels, there is a strong case for prioritizing the U.S. federal government, and especially Congress. As Jason argues in his first piece: “Congress is the only institution that’s both powerful enough to reliably override the desires of multi-billion-dollar corporations, and whose decisions are durable enough that a victory today will still be relevant during the critical time period.”

Why isn’t research enough?

Even if Congress is the right target, the question remains: how do we increase the odds that AIS-focused legislation passes? To date, the movement has largely answered: “by doing more research into best practices”. To make that more concrete, the governance side of the AIS movement is currently investing 3x more into research (by number of FTEs) than advocacy[2].

This emphasis on research is understandable. Research is safer to fund: its downside risk is usually wasted effort, whereas advocacy can backfire, for example by making AI policy partisan or locking in a flawed regulatory regime. But at some point, the urgency of the risk requires taking those bets. If you believe there is a real chance transformative AI (TAI) could arrive in the next five years, is it wise to place nearly all resources on more research? Many funders appear to be acting as if TAI is still decades away[3]. Those worried about shorter timelines often face a tough choice: either support more radical responses like PauseAI, or back interventions unlikely to make the difference on sub-five-year timelines, with little in between.

This is not to deny the importance of research, which remains indispensable. But advocacy is far more neglected, even though it is a necessary bridge between good ideas and real-world change. As Jason argues in his second piece, good governance ideas do not spread automatically. They require translation, communication, and active promotion before they can influence legislation.

How do we get from ideas to laws then?

Consider the reality inside Congress. Before you can convince a staffer to support your idea, you first have to convince them it is worth even considering. Jason estimates that the average staffer may have only 20–30 minutes per year to think about AI[4]. That is barely enough to skim headlines, let alone evaluate sci-fi-sounding technical proposals about existential risk. And most governance research is still several steps removed from concrete, implementable policies. As a result, many promising ideas remain just that: ideas nowhere near developed enough for an interested policymaker to implement.

Advocacy can expand that narrow window of attention. This can happen indirectly, by raising public awareness so constituents bring their concerns to offices, or directly, by building relationships with staffers and making yourself a trusted resource. Large companies can buy influence by hiring large numbers of former staffers as lobbyists, but advocacy organizations without those resources have to earn it through persistence and credibility.

Now you have a meeting with a staffer, but success is still far from automatic. Policymakers face countervailing pressures, and AI companies, whose incentives often run against safety, will push back hard. SB 1047 made this clear: despite their rhetoric, AI companies’ interests often diverge from safety in significant ways. For a policymaker, opposing these companies means expending scarce political capital against actors with vast resources, high-profile CEOs, and broad public approval.

For this reason, the AIS movement needs more advocates: people who can build those relationships, communicate risks effectively, and translate abstract governance ideas into actionable legislative proposals. Enter CAIP.

What was CAIP up to?

CAIP was the first of its kind in the sense that we were advocating for “strong AI safety legislation”: legislation which, if passed, would cause a meaningful reduction in the existential risk posed by AI. That mostly came through advocating for our model legislation, a licensing regime which would involve evaluations by independent auditors and implementation of other solutions like hardware security, liability reform, and emergency powers. Recognizing that such legislation is quite ambitious, we also drafted an action plan each year, outlining our support for smaller, more politically viable solutions. Our 2025 Action Plan focused on whistleblower protection, cybersecurity, and frontier model planning, drafting model legislation[5] for each. We worked to raise awareness of the risk and promote these priorities in multiple formats, but most importantly through direct meetings with Congressional offices, taking 406 meetings with congressional staffers across our two years[6].

We also focused on raising staffers’ general awareness of the risks and educating them[7]. On the Hill, that looked like holding eight congressional briefings on subjects ranging from AI’s effects on education to cybersecurity to the workforce, reaching over 150 staffers[8]. We released reports alongside each of the briefings, distilling the existing research on areas like AI agents and autonomous weapons or AI’s effects on misinformation and music, analyzing how further developments in AI would be likely to affect the risk, and laying out what policies might reduce it. CAIP also conducted original research, producing reports on whistleblowers, open source in the context of competition between the US and China, and the flaws in the US-China race framing.

Along with the briefings, we held more informal AI policy happy hours which facilitated mingling between staffers, researchers, and other relevant policymakers. We were exploring other events too, and hosted a first-of-its-kind AI Demo Day where AI safety teams from across the country demonstrated some of the risks from AI to staffers and key decision makers like Representative Bill Foster[9].

If you’d like to read more about what else we were up to, you can read our 2024 Annual Report. There are also a fair number of other projects I haven’t covered here, like our OSTP response and recent letter, our many RFC responses, our district visits, our congressional campaign questionnaire, and many more.

Was the Work Impactful?

The most straightforward answer to the question of CAIP’s impact is that we don’t really know.

At its core, CAIP was an organization set up for future impact. I think we were making real progress on building the types of relationships that would have increasingly opened up opportunities for impact as we continued, and that positioned us well for a critical moment[10]. That said, we can point to some clear positives. We introduced AIS ideas to staffers who had never engaged with them before, and encouraged others to engage more seriously than they otherwise would have. We also contributed via:

Changes to legislation

In three cases, we proposed edits to offices’ draft legislation that were accepted and that meaningfully improved the safety impact of those bills. For example, we secured changes that expanded the powers of a safety office, ensured financial independence for safety officials, and delegated new authorities to agencies capable of exercising them effectively[11].

Public endorsements for legislation

We publicly endorsed about a dozen bills with positive AI safety implications. Three sponsoring offices cited us by name in their official press releases, including full quotes, and the House Committee on Science, Space, and Technology (SST) linked to our endorsements for several of the bills they recommended. We were also the only listed endorser of the Nucleic Acid Screening Act, arguably the second most important piece of AI bio-risk legislation introduced in 2024. Beyond these citations serving as an indicator that CAIP’s opinion was being taken seriously, research indicates that bills with more endorsements tend to attract more co-sponsors, which in turn increases their chances of passage. Other state-level studies also suggest that interest group support helps bills move out of committee, so the SST Committee’s citation of our endorsement likely reflects a real, if modest, contribution.

Our model legislation

In addition to our regulatory regime proposal, we drafted model legislation on politically viable priorities such as cybersecurity protections, frontier AI planning, and whistleblower protections. I still believe our model legislation for a regulatory regime was particularly valuable. One of the biggest losses from CAIP’s closure is that, to my knowledge, no advocacy organization is currently iterating on concrete legislative frameworks that could substantially reduce risk.

Keeping AI top of mind

We endorsed two Congressional candidates who pledged to prioritize AI safety. One, Rep. Tom Kean Jr. (R-NJ), later posed sharp questions in a letter to AI CEOs, pressing them on their voluntary security commitments. His language echoed phrasing we had used in our candidate survey.

Media perception

Perception is important in DC, and when CAIP came in, media portrayals of AIS- and EA-associated organizations in the AI space were fairly brutal. Through CAIP’s and other organizations’ work, EA-associated organizations went from being depicted as a naive or corrupt pet project sponsored by billionaires with vested interests to being cast as heroes looking out for the public interest[12].

For responses to common objections about the impact of our work, and an overview of our team, feel free to give those sections of Jason’s first post a read.

Why did CAIP Shut Down?

The immediate reason we shut down is simple: we did not have sufficient funding to continue operations. The deeper question is why the funding dried up. I think there are three main factors, with the truth likely lying in some mix of them:

  1. CAIP actually wasn’t that effective

  2. Funders have set the bar for advocacy too high

  3. Structural biases in the funding space

CAIP’s Failures

CAIP was far from perfect. We were a small, scrappy team learning as we went, and mistakes were made. While I was writing this post, several people familiar with the funding space suggested that our fundraising difficulties stemmed primarily from CAIP-specific shortcomings, rather than from broader opposition to advocacy or fixed “quotas” for different types of projects[13]. That has led me to update toward thinking that CAIP’s own mistakes played a larger role than I had previously believed.

Multiple people have mentioned that CAIP’s most significant mistake was likely hiring too quickly, leading to employees mismatched to their roles and gaps in critical skill sets, such as technical expertise. These mismatches forced restructuring, which improved the situation in some ways but still left some team members in positions that did not suit their backgrounds.

In the absence of a clear direction, different visions of how best to advance CAIP’s mission emerged[14]. This internal misalignment weakened our ability to advocate effectively and contributed to two founding team members leaving early. That’s sufficient reason for anyone to pause and question why it happened, and I wouldn’t blame any funder who marked it down as a negative on our spreadsheet[15].

I am sure CAIP made other important mistakes that weighed on funders’ minds as well[16], and any account of why CAIP shut down likely has to acknowledge them. That said, in no conversation about this piece has anyone argued CAIP was a net-negative endeavor. I’m convinced that whatever mistakes we made limited our own effectiveness at most, and didn’t reflect negatively on the broader space.

Funders Have Set the Bar Too High for Advocacy

When I began writing this post, I assumed funders were broadly opposed to advocacy (see A2 for some original responses to such concerns). I now think that is too strong. Few seem entirely against advocacy, but many set such a high bar for quality that very few advocacy organizations can clear it[17], seemingly due to two concerns:

  1. Advocacy can crowd out other, more effective advocacy

  2. Advocacy is riskier than the average grant

Advocacy Orgs Taking up Limited Resources

The first concern, that advocacy groups “take up too much oxygen,” assumes that time with policymakers is zero-sum. If one group meets with a staffer, the next AIS advocate may not be given time to make their case. I don’t imagine that there’s no tradeoff at all, but I think the tradeoff is likely concentrated between causes, or between opposing sides of a cause, rather than within the groups working towards the same goal. In fact, I expect that increasing interest in AIS itself increases opportunities for other organizations. Inviting Senator Blumenthal to speak at our AI Demo Day might have meant he was less likely to accept another invitation to an AIS event, but I think he probably left the event slightly more interested in the issue due to engaging with the demos (and in fact he has since attended other AIS events).

Advocacy is Too Risky

The second concern, that advocacy is risky, is more persuasive. Advocacy can backfire in serious ways, such as entrenching flawed policies or polarizing an issue[18]. I share these concerns and often asked myself whether our work risked increasing polarization. Given that there are many unintuitive ways an organization could increase partisanship, it’s reasonable to set a fairly high bar for the quality of an organization’s leadership, so that you can trust they’re cautious and politically savvy enough to avoid such risks. CAIP worked to mitigate the risk by deliberately cultivating relationships with both Democratic and Republican offices[19], which I think was a reasonable mitigation[20].

But it’s important to consider the default path. AI is entering the political arena regardless, and polarization risks may grow whether or not AIS advocates are involved. The real question is whether thoughtful advocacy increases those risks, or whether it helps mitigate them compared to the status quo[21]. These concerns warrant a higher bar for funding advocacy; I’m just not sure they warrant a bar as high as the current one.

Biases in the Funding Space

Beyond legitimate concerns, structural biases also tilt the playing field against advocacy. Despite advocacy being a standard tool for social change, less than 3% of AIS funding goes to advocacy efforts (see A4 for details). That imbalance can be seen in:

  • The existing allocation of funding to organizations working on AIS. Jason’s work, mentioned above, estimated there are three researchers for every advocate in AI governance[22]. This is a conservative estimate, and I think the ratio is more realistically five to one[23].

  • Talent pipelines. There’s no fellowship training people to go directly into AIS advocacy, compared to over 10 such efforts aimed at research. We’re training AIS researchers by the hundreds and leaving advocates to figure it out for themselves.

  • AIS funders’ backgrounds. Major funders employ 4x as many academic researchers as advocacy experts[24]. This likely leads them to gravitate toward what, and who, they know.

  • Grant evaluation methods. Funding decisions often favor organizations that can show immediate, concrete outputs. That puts ambitious advocacy projects, whose impact is diffuse and long-term, at a severe disadvantage[25].

But I don’t want to give the impression that all funders have dropped the ball here. Only a few funders have decided to fully avoid funding advocacy; the rest have made at least some small bet and remain open to further opportunities. The Survival and Flourishing Fund (SFF) made multiple bets on CAIP and other advocacy organizations and is clearly taking funding advocacy seriously, something I’m deeply grateful for[26]. And to the credit of the ecosystem as a whole, I am not aware of any funder that dismissed CAIP without at least a shallow evaluation, a small but meaningful indication that proposals were taken seriously. I also know that I have no right to tell them how their funds are allocated, merely the opportunity to make the case that advocacy deserves a more prominent place in AIS funding, and to hope it might resonate.

What can we do?

This leads me to you, the individual reading this. If you’ve followed me this far, you likely agree that some degree of further investment in advocacy is warranted[27]. But how exactly can that be done?

You can of course try to change the minds of funders, and that’s at least part of my motivation for writing this. You can also begin an advocacy career yourself, preparing to contribute effectively down the line when further opportunities (hopefully) open up. If you’re well suited for either of these paths, they might be worth pursuing[28]. But from my vantage point, the two most promising ways[29] to support advocacy are donating yourself and starting a new organization.

Those earning to give have a unique opportunity here to fill a serious gap in the AIS space. Even if major donors were to change course and fund advocacy more aggressively, many are constrained by structural factors largely outside their control. Donations to organizations primarily focused on advocacy (classified as 501(c)(4)s[30]) aren’t tax deductible in the US[31], and a charity that directs too much to c4-type advocacy efforts risks losing its own c3 status (for which donations are tax deductible). On top of that, DAFs, foundations, and corporate matching programs often can’t donate to c4s at all[32], effectively closing off ~38% of the US giving ecosystem[33]. This gives small donors a real chance to make a difference here[34], potentially deciding whether the next CAIP exists or not[35].

The challenge, of course, is figuring out where to give. Advocacy carries real risks of harm, and much of the work must remain private, making evaluation difficult. This doesn’t bode well for a prospective donor, who needs knowledge not just of AIS and policy but potentially also of polarization dynamics, a largely impractical ask for the majority of small- to medium-scale donors.

The good news: anyone considering giving $100,000 or more[36] can reach out to me, and I’ll connect you with someone who can advise personally on donation opportunities in this space. But for smaller-scale donors like myself, the solution is trickier. Traditionally, one would establish or join a fund, pooling resources with others who share your priorities, then letting a manager with the necessary context and time direct them to the best opportunities[37].

The problem is that existing funds often under-allocate to advocacy[38], and while starting a new fund would be ideal, it would require a highly capable (and likely already time-constrained) manager. My best proposal, at least for now, is a small, informal group that shares recommendations for promising, low-risk advocacy donation opportunities. I’m planning to start something along these lines; if that interests you, please fill out this form.

For those who want to press ahead and donate directly, I’ve included an overview of current advocacy organizations in A5, with notes on why you might want to support them. Beyond donating to the c4s themselves, you can also donate to PACs associated with advocacy organizations, which helps increase congressional attention to AIS and demonstrates constituency strength[39]. Reach out[40] if you’d like to know more about PAC opportunities.

Personally, I’ll be donating in part to the Secure AI Project moving forward, and saving another part in an attempt to help seed future advocacy org attempts (see below).

Start an Organization

Few will have the necessary skillset[41], but it seems clear that the space is in need of at least two additional orgs:

  1. An advocacy org focused on pushing more ambitious policy proposals (a “CAIP 2.0”)

  2. An advocacy org focused on building the talent pipeline

CAIP 2.0, the catastrophic-risk-focused c4.

I’ve argued that we need more advocacy generally, but I also think the space is missing something important without CAIP: an advocacy group that’s fully focused on catastrophic risk, framed as such rather than in more immediately palatable language. There are currently no advocacy organizations focused on developing and promoting strong AI safety legislation, legislation which might really give us a shot at controlling the risk. Though CAIP’s closure might discourage some from trying again, I think a well-structured effort could succeed. In speaking with others while drafting this piece, multiple people expressed excitement at the prospect of a CAIP 2.0.

I’m also happy to do what I can to help get any such organization off the ground. One of my last acts at CAIP was compiling a package of our internal documents, which can help accelerate any successor efforts. If you’re interested, feel free to reach out, and if I can’t answer your questions, I’ll try to connect you to someone from the team who can.

An Advocacy Talent Pipeline Organization.

Others have pointed out that the real bottleneck may not be funding, but skilled talent that can be trusted[42]. Most AIS advocates have 10+ years of experience, and there are few entry points for promising early-career candidates[43]. Without an organization to filter and place talent, we’re leaving significant potential impact untapped[44].

It’s not clear whether there’s a precedent for exactly this kind of organization in other industries or spaces, but organizations like Talos (in European policy careers) and Tarbell (in journalism) show what’s possible when you train high-potential candidates and place them in important roles. On the funding side, getting the pitch right is likely hard, but a positive is that Open Philanthropy appears open to seeing applications for starting such an organization. I’ve also got some limited further thoughts to share here, so feel free to reach out.

I believe the world would be much safer with these organizations in existence. If you think you have the experience to build one, I’d strongly urge you to consider it.

In Conclusion

In my view, spending less than 3% of our total funds on direct advocacy to reduce risk from AI is a mistake. CAIP gave me hope that we were on our way to correcting that imbalance, and I really do feel that the world is in a riskier place now with fewer advocates working on AIS. I’ve made my bet on advocacy as a driving force for AIS for the past two years, and plan to continue supporting it however I can. I hope you’ll join me.

Appendix

A1: What is Advocacy?

Advocacy is a fairly messy term, and I think that some disagreements over what we should and shouldn’t fund probably ground out in people imagining different things when someone says “advocacy”.

Advocacy in a very broad sense is the work of arguing for a cause or policy and trying to influence decisionmakers towards that policy. When I mention advocacy in this post, I’m referring to actions which involve both of those aspects, which differs from what others seem to define as advocacy[45]. We can also talk about sub-types of advocacy like: direct advocacy (a.k.a. lobbying) which involves taking a meeting with (or arranging events for) policymakers yourself, and grassroots advocacy which involves encouraging others to contact policymakers to encourage them to take a specific action on a given policy.

Note that actions which just argue for a given policy can be considered advocacy even if you yourself aren’t personally doing the advocating. I didn’t meet with many staffers personally, but the reports I wrote were given to staffers at our briefings, connecting my arguments to policymakers. That is to say, research which argues for a given policy or for taking action on a risk isn’t automatically advocacy, but it can be if it’s part of a larger operation which ensures those thoughts are conveyed to policymakers.

A2: Responses to General Opposition to Advocacy

Of the range of concerns brought to bear, two stand out:

  1. Advocacy generally might not be an effective lever to promote change.

  2. Advocacy efforts could be net negative and lead to durable negative effects.

The Effectiveness of Direct Advocacy for Change

My response to the concern that advocacy might not be effective is something like: “It might not be! But how do we get a better sense of whether it is or not?” I haven’t looked extensively into the literature on the effectiveness of direct advocacy of the sort CAIP was doing, but I have looked into the literature on the effectiveness of citizens advocating to their legislators, and the evidence there is mostly not well formed, with some reason to believe such advocacy might drive change. I imagine the state of research on the effectiveness of direct advocacy is similar, indicating that confident takes in either direction likely aren’t well founded, and that further evidence gathering is warranted[46].

Advocacy as Potentially Net Negative

A more specific worry is that advocacy for specific legislation runs the risk of locking in a regulatory regime that might be net negative, as legislation can be remarkably durable at the federal level in the US (full repeals are quite rare, with less than one repeal on average per Congress historically). But here I would put the onus on the opposed to engage with existing legislative proposals and highlight how they demonstrate that risk. Someone concerned with the durability of legislation could propose that we write in an automatic sunset clause or required reauthorization periods, or comment more specifically on the multiple provisions we included precisely to make good-faith changes to the regulator possible.

A3: AIS Grantmakers’ Positions on AIS Advocacy

Reviewing publicly listed grants, I found:

  • Open Philanthropy (OP) recommended a significant contribution (which Good Ventures made) to Americans for Responsible Innovation, but hasn’t publicly listed any other grants.

  • The Survival and Flourishing Fund (SFF) has supported advocacy on multiple occasions, supporting c4s like CAIP, CAIS AF, Encode, and others.

  • The Future of Life Institute (FLI) has its own c4 but doesn’t normally seem to fund other c4s, with the exception of recently funding Encode, indicating a potential change in direction.

  • Founders Pledge (FP) does not fund 501(c)(4) work. Their AI recommendations exclude 501(c)(4)s, and their GRC Fund payouts also exclude them.

  • The Long Term Future Fund (LTFF) doesn’t seem to have funded any c4s, and doesn’t seem likely to moving forward.

  • The AI Risk Mitigation Fund (ARM Fund) spun out of the LTFF, but doesn’t seem to be pivoting towards funding any advocacy projects.

  • Longview has no publicly listed grants to 501c4 advocacy organizations, but they don’t seem to publicly list most of their grants.

A4: The Estimate of Funds Spent on AIS Advocacy

This is a very rough estimate, most importantly because not all grants made by funders are reported publicly, and some even specifically flag that political advocacy might be the type of thing they wouldn’t report. I am also focusing on direct, or c4-type, advocacy here, and expect the estimate would be higher/​messier if you included the broader advocacy definition from A1[47].

Without fully running the math, my explorations of the publicly listed grants suggest that less than 2% of publicly reported AIS grants go towards direct advocacy. The actual percentage is almost certainly higher, given that there’s likely under-reporting and that individual donors aren’t covered here. Adjusting for that, I’d say I’m 90% confident that between 1-15% of AIS funding goes towards direct advocacy. Someone familiar with the AIS funding landscape confirmed this impression, estimating the number likely falls between 1-10%, with 2.5% as a best guess. With an estimated $400 million to $2 billion spent on AIS total each year, and a best guess of something like $800 million total per year, that would represent around $20 million going towards advocacy each year.
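To make the arithmetic explicit, here’s a minimal sketch in Python using only the rough estimates above (none of these inputs are verified figures, just this post’s own guesses):

```python
# Back-of-the-envelope version of the advocacy-spending estimate above.
# All inputs are this post's own rough guesses, not verified data.

total_ais_spend = 800e6  # best-guess total annual AIS spending, USD

# 90%-confidence bounds on the direct-advocacy share (1-15%), 2.5% best guess
for share in (0.01, 0.025, 0.15):
    print(f"{share:.1%} of $800M = ${share * total_ais_spend / 1e6:.0f}M/year")

# Prints:
# 1.0% of $800M = $8M/year
# 2.5% of $800M = $20M/year   <- the ~$20M best guess cited above
# 15.0% of $800M = $120M/year
```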

A5: Donation Options in the AIS Advocacy Space

This will be low context, as I myself lack a lot of the relevant context here and can’t share all of it publicly, but some organizations in the space are:

  • Americans for Responsible Innovation (ARI): ARI’s strategic aim is to be the one-stop shop for AI policy in DC. That means their focus includes present harms, but also a broader portfolio of x-risk-relevant work, e.g. targeting the AI x bio risk overlap, which otherwise goes fairly neglected. To that end, ARI has built up the biggest team working on the issue, bringing in significant traditional policymaking experience and pairing their advocacy efforts with a strong in-house policy research team.

    • Funding status: ARI’s received large grants for both its c4 and c3 work from Open Phil, making it likely the safest bet here but also meaning that they’re not as funding constrained at the moment[48].

  • Future of Life Institute (FLI): I’ve personally been continually impressed with Hamza Chaudhry’s work and think that few other advocates are doing such impactful work. FLI is able to weave between a wide range of topics, and has helped set up a number of seemingly successful events, working with others in the space like FAS.

    • Funding status: FLI received a very large grant from Vitalik Buterin and doesn’t seem to have been very active in fundraising since, indicating a marginal donation to FLI’s c4 might not be as impactful. It’s at least not clear to me what further funding here would buy.

  • AI Policy Network (AIPN): AIPN has lobbied Congress on AI legislation since early 2024, working to integrate national security risks from AGI and ASI into mainstream political discussions while advancing politically viable proposals. Their lobbying efforts are led by Mark Beall, who brings over a decade of Defense Department experience, where he founded and directed DoD’s Joint AI Center. Mark has discussed risks from superintelligence and loss of control via Fox News and testimony before the House of Representatives CCP Committee, among other outlets.

    • Funding status: Actively looking for funding.

  • Encode: Encode’s niche seems to be taking a wide range of bets across a number of different levels (events, policy like influencing the NDAA, state level work like co-sponsoring SB 1047, contributions to executive AI policy, etc.). They also seem to focus on a wider set of risks, and are coming at things from more of a grassroots perspective, even if that isn’t their main focus.

    • Funding status: Encode received a fairly large grant from FLI and a smaller grant from SFF, so a marginal donation here might not be as impactful, but they are still actively looking for funding.

  • Secure AI Project (SAIP): If you were excited by SB 1047 and want to see more work done at the state level, SAIP is probably your best bet. SAIP is a policy development and advocacy organization focused on passing AI safety bills in state legislatures. They are co-sponsoring CA SB 53 (Scott Wiener), which is up for a final vote in the CA legislature, and worked closely with NY Assemblymember Alex Bores and Senator Andrew Gounardes on the RAISE Act, which is pending final approval from the governor.

    • Funding status: They are seeking funding for their 2026 and 2027 operations. They’re especially excited about bringing on more individual donors at any amount (donation link here) as it would show a broader base of support.

  • Center for AI Safety Action Fund (CAIS AF): CAIS AF is well placed to focus on the national security angle of AIS, with a current focus on chip security, supporting multiple promising efforts towards e.g. location verification and increased BIS capacity. Their work is guided generally by Superintelligence Strategy and benefits from guidance from leading experts like Dan Hendrycks. Much of their work can be seen as helping set up the tools necessary now to be able to better execute on that strategy when the time comes.

    • Funding status: Actively looking for funding.

  • PauseAI US: A nationwide advocacy group focused on both grassroots advocacy and direct lobbying. They advocate for a global treaty banning the development of AGI and ASI, until we know how to keep this technology safe and in humanity’s control. They have 15+ local groups across the US, which run public info sessions and action workshops getting people to contact their elected officials and engage in district lobbying. Their DC and in-district efforts have yielded over 100 meetings with policymakers, at both the federal and state level.

    • Funding status: Actively looking for funding.

  1. ^

    A capital “A” with a number will reference particular sections of the Appendix.

  2. ^

    It is important to flag that the line between “advocacy” and “research” here is blurry. Organizations like RAND and CSET produce research that involves coordination with policymakers. My (largely uninformed) impression is that most of this work is two levels removed, but I could be mistaken. RAND’s and CSET’s 501(c)(3) statuses make clear that only a fraction of their work goes to c4-type advocacy, providing some degree of support for that impression. Either way, the 3x estimate is quite conservative, so I think it likely holds up. That does mean that “research into best practices” might not best capture existing funders’ preferences, though, as organizations like Open Philanthropy have invested significantly in organizations like RAND and CSET, whose work is better connected to policymakers.

  3. ^

    Not only by allocating a disproportionate amount of resources to research, but also by targeting smaller, more achievable policy wins within the advocacy work that is being done. CAIP was doing the entirely neglected work of preparing legislation that’s outside the current window of possibility but which could represent our best guess at legislation that would actually address the risk head-on and in full. We also need to be taking bets on pushing more ambitious policies outside the current realm of feasibility.

  4. ^

    Here I’m referring to a staffer who manages a portfolio of multiple issues. There are certainly some staffers, generally in offices more interested in AI, who are able to dedicate much more time to AI specifically, even hours per week, but these staffers are not the norm.

  5. ^

    Model legislation takes the final step of making an idea as concrete as possible: drafting a “model” or example bill which puts the idea into legislative specifics and gives offices interested in the idea a rough first draft from which they can take inspiration or work through further specifics to prepare a bill they’d actually introduce.

  6. ^

    The vast majority of these took place in our second year, when we finally had a more fully rounded-out team.

  7. ^

    Another way we did this was through our media efforts. We raised awareness more broadly through our presence on LinkedIn, where we shared some of the 122 blog posts we wrote, reaching over 8,000 followers. We also put out over 70 weekly newsletters, which were going out to 2,000 subscribers by the end, alongside 16 podcast episodes. This activity garnered us features in 46 pieces of earned media, in outlets like Politico, Wired, The Hill, and FOX.

  8. ^

    By taking on issues that were already of interest to staffers, we were able to connect existing areas of concern to AI, broaden the tent of those concerned about AI, and grow our own network.

  9. ^

    That was part of our larger grassroots organizing efforts, which were just taking off and would have supported these AI safety teams in scaling up presentations of their demos, further raising awareness in their own communities. We had also begun work on grasstops organizing, forming relationships with various stakeholders in industry like the RIAA and Palo Alto Networks, as well as professional groups like the National Association of Social Workers.

  10. ^

    I think we had at least enough to show to justify another year of runway.

  11. ^

    We were also actively involved with the FY2026 appropriations process and the FY2026 NDAA, which are essentially the only two bills that Congress must pass every year. Several Congressional offices specifically and proactively invited us to submit proposals for adding AI safety measures to defense funding or to general funding. We submitted fifteen such proposals.

  12. ^

    Though I’ll note that someone reviewing this piece gave a fairly convincing alternative explanation for the change. Early on, EA and AIS were perceived as dominant players in the AI policy space, but over time it became increasingly clear that wasn’t the case and that industry was more in the driver’s seat, which shifted critical journalists’ sights onto industry instead.

  13. ^

    This is further supported by the fact that multiple other advocacy organizations have made it off the ground and haven’t seemed to publicly struggle with funding.

  14. ^

    At least in terms of the second scaling-up of the team, I’m not sure slow growth was the obviously correct choice at the time. It would have been hard to predict the shift from very eager six-figure donations in 2024 to widespread rejection in 2025. It also seems reasonable that a large budget might be necessary for an operation like CAIP, where the MVP requires a larger set of skills than an average research-focused org.

  15. ^

    I realize I’m leaving this unpacked here, but I don’t think I’m the right person to speak for anyone, nor am I even sure it would be a positive to adjudicate differences publicly. One team member said they left because they: “updated down on the tractability of passing significant legislation in Congress in the next few years, were not excited about our team, and didn’t feel like advocacy was the right personal fit”.

  16. ^

    Though I don’t think this impression is widely shared, I personally think CAIP could have done a better job of engaging with the EA & LW communities. The odds that doing so would have garnered sufficient widespread individual-donor support for CAIP’s mission are perhaps low, but it was something potentially worth betting on. I also expect that engaging further could have led to earlier conversations about CAIP’s impact and various doubts, which could then potentially have been addressed.

  17. ^

    By high bar, I mean that leadership is expected to be super-star level, and the organization is expected to be staffed by people with significant AIS or Hill experience.

  18. ^

    Speaking about advocacy as the practice of raising awareness of a risk en masse, Benjamin Todd (80,000 Hours) has noted that the risk stems from ideas being very hard to retract once they’ve been put out; combined with the outsized weight of first impressions, this means that early advocates on an issue are working at a particularly pivotal and important time. Applied to direct advocacy, this means meetings where you’re the first to raise AIS with an office might be especially important and risky, and should perhaps require extra caution and preparation.

  19. ^

    Failure to cultivate relationships with both parties seems to be a plausible explanation for what led to climate change becoming partisan.

  20. ^

    Though I think there are real questions about how AIS organizations would respond to potentially risky future scenarios, e.g. “How do organizations relate to a leading figure of one of the political parties coming out strongly in favor of AIS, i.e. how do they react to the Al Gore of AIS?” I think it’s more than reasonable for funders to worry about situations like this, but I’d expect the proper response to be incentivizing research toward resolving some of these uncertainties, rather than writing off advocacy generally as too risky.

  21. ^

    For example, those more on the AI Ethics side of AIS seem to already have a pretty strong left-leaning bias. Do we expect that they’ll naturally correct for that as they begin messaging their concerns to policymakers?

  22. ^

    I think this is likely reasonably accurate across the space, but I am less familiar with the European context here and how much time certain European think tanks spend on advocacy (e.g. how much of Pour Domain’s work is focused on working with policymakers).

  23. ^

    Using a quick estimate for the technical research side, I expect that across the space as a whole we likely have 13 total researchers for every advocate.

  24. ^

    Though I think funders really are making progress here, recently scaling up their hiring of people with advocacy- and policy-relevant experience; Longview’s recent hiring round was especially notable.

  25. ^

    This doesn’t mean that we should fund organizations entirely on trust and forgo evaluating them for effectiveness. It means that traditional methods for evaluating more uncertain opportunities must be adapted to advocacy, and that such methods can’t set so high a bar as to lock out all but the surest bets.

  26. ^

    I was largely disconnected from our fundraising efforts, so I can’t comment on how each funder handled requests. But I personally have to give thanks to Longview, who seemed to do what they could even if it didn’t lead to funding in the end.

  27. ^

    When I say further advocacy is warranted here, I’m mostly talking about direct advocacy, and to a lesser extent grassroots advocacy. I’m far less sure about more general advocacy efforts done via pathways like journalism. They might be impactful, but the audience really matters here, and it’s hard to tell from the outside whether these efforts are reaching the right people.

  28. ^

    The experience with CAIP has highlighted to me just how important some of these grantmaking roles are. You could try to become a grantmaker yourself, but there are naturally not very many positions, and they require a depth of context in the grantmaking area that’s not easy to obtain. One pathway could be gaining traditional grantmaking experience and then leveraging that skillset to try to make it into one of these roles, but I’m unsure if that would be enough to be competitive with candidates who have more experience in the cause area.

  29. ^

    Beyond carrying on, and engaging in, this conversation.

  30. ^

    When we say charity, we are referring to a number of different types of organizations, but generally to the type categorized as a 501(c)(3) in the US. Such organizations can conduct advocacy, but must stop short of spending a “significant” amount of their resources on lobbying. Such lobbying can be direct (i.e. communicating with a staffer or government official) or grassroots (i.e. communicating with the public about legislation and asking them to take action on it).

    Organizations which wish to spend more than 20% of their budget on lobbying can still register as a charity, but as a 501(c)(4) instead. The major practical difference between the two is that donations to 501(c)(4)s are not tax deductible. Organizations with a larger resource pool can simply set up sister c3 and c4 organizations, housing lobbying efforts within the c4 and keeping the majority of the operation within the c3. Without the ability to do so, you lose access to donors whose donations hinge on tax deductions.

  31. ^

    Nor are they in most (and potentially all) other countries.

  32. ^

    I haven’t confirmed this, but someone familiar with the funding space thinks that donating to c4s through these vehicles is at least more complicated, if not generally restricted.

  33. ^

    The (very) rough estimate: foundations were reported to move 19% of total US donations, DAFs ~18% (estimated at 27% of individual giving in 2022; in 2024, individuals accounted for 66% of total giving, so 27% × 66% ≈ 18%), and corporate campaign matching programs ~0.5% (very rough, totaling an estimated $2.86 billion of $592 billion in total giving in 2024, so 2.86/592 ≈ 0.5%).
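    As a sanity check, here’s a minimal sketch of that arithmetic in Python, treating the figures above as the rough estimates they are rather than verified data:

```python
# Rough reconstruction of the ~38% figure, using only the estimates above.
foundations = 0.19       # foundations: ~19% of total US giving
dafs = 0.27 * 0.66       # DAFs: ~27% of individual giving x ~66% individual
                         # share of total giving in 2024, i.e. ~18%
matching = 2.86 / 592    # corporate matching: ~$2.86B of ~$592B total, ~0.5%

closed_off = foundations + dafs + matching
print(f"~{closed_off:.0%} of US giving is hard to route to c4s")  # ~37-38%
```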

  34. ^

    Moreover, small donors can also strengthen the position of an organization, helping it demonstrate support for its cause from a broader coalition.

  35. ^

    Personal heroes here would be Michael Dickens and Peter McCluskey, who did their best for CAIP at the end.

  36. ^

    This doesn’t have to be a single sum; it can also be given over the course of a year.

  37. ^

    I personally run all of my donations through funds right now because I don’t feel I have adequate time/expertise/context to evaluate the sort of organizations I’m interested in supporting. I’m looking to move at least slightly away from that going forward, but generally I’m a big proponent of “let someone who has much more context and time evaluate this”.

  38. ^

    More general funds might normally be best placed to allocate to the most impactful opportunities across the ecosystem, but given that they’re restricted in how much they can spend on advocacy, and that they face more significant risks for such opportunities than individual donors do, we can expect advocacy to face a higher bar for funding than other opportunities. Though I will note, this becomes increasingly less true as more major funders advise HNW/MNW individuals, whose giving they can then strategically direct to these opportunities.

  39. ^

    One reviewer mentioned this was perhaps “the biggest bottleneck for advocacy orgs”. I’m even less knowledgeable about this area, as CAIP didn’t have an associated PAC, but I’m also more concerned about PAC contributions, for reasons of potentially increasing partisanship. Balancing my lack of knowledge against my worries, I come out somewhat neutral on whether or not to donate to PACs at the moment, and will myself be directing my donations to the c4s themselves.

  40. ^

    tristan31500@gmail.com

  41. ^

    Someone familiar with the space gave their quick assessment of what it would take for both: someone with demonstrable experience moving policy in DC and/​or influencing public narrative, ideally also with significant management experience. They’d also need significant context on the policy substance of global catastrophic risks, but that might be achievable with a few months of focused effort. The best version would likely be a pair which combines the two traits.

  42. ^

    One reviewer mentioned how hard it would likely be to vet candidates as part of a talent pipeline: it’s already hard to balance a variety of factors when selecting fellows for a research fellowship, even without the higher downside risk that comes with platforming the wrong person, a risk which might extend beyond the program itself to its funders and potentially the broader AIS movement.

  43. ^

    It’s not clear if the limited number of roles means that this project would be more or less impactful. As one reviewer remarked, there’s a Catch-22 here: you need to be a trusted, experienced advocate to get funding for a project or role at an advocacy org, leaving you no good place to start. I’m really quite unsure, but a plan that makes sense to me would be: such an organization runs one round of a fellowship to create a pool of advocates, who could go on to create further advocacy projects themselves, which could then be staffed by later iterations of the fellowship.

  44. ^

    It’s true that Horizon is playing a helpful role in this space, but only incidentally, as experience in policymaking can be a helpful background for later pivoting into advocacy.

  45. ^

    E.g. Benjamin Todd (80,000 Hours) defines advocacy in a career context as “trying to get into jobs where you have a public platform of some kind, such as being a journalist, or working at a campaigning organization, and then using that to promote important ideas”. Others seem to group things like communication more broadly, or certain forms of translation, into this bucket as well.

  46. ^

    There’s also a very basic reason to think it might be effective: companies invest significant amounts in lobbying.

  47. ^

    Someone familiar with the AIS funding space estimated that efforts to fund advocacy more broadly are probably above 5% of the total portfolio, perhaps 5-15%.

  48. ^

    Though there are other funding opportunities to support their work that are considerably more impactful and currently funding constrained. Feel free to reach out if you’d like to explore donating to ARI.