Reasons to sell frontier lab equity to donate now rather than later

Tl;dr: We believe shareholders in frontier labs who plan to donate some portion of their equity to reduce AI risk should consider liquidating and donating a majority of that equity now.

Epistemic status: We’re somewhat confident in the main conclusions of this piece. We’re more confident in many of the supporting claims, and we’re likewise confident that these claims push in the direction of our conclusions. This piece is admittedly pretty one-sided; we expect most relevant members of our audience are already aware of the main arguments pointing in the other direction, and we expect there’s less awareness of the sorts of arguments we lay out here.

This piece is for educational purposes only and not financial advice. Talk to your financial advisor before acting on any information in this piece.

For AI safety-related donations, money donated later is likely to be a lot less valuable than money donated now.

There are several reasons for this effect, which we elaborate on in this piece:

  1. There’s likely to be lots of AI safety money becoming available in 1–2 years.

  2. Several high-impact donation opportunities are available now, while future high-value donation opportunities are likely to be saturated.

  3. Donating now unlocks the ability to better use the huge amount of money that will likely become available later.

Given the above reasons, we think donations now will have greater returns than waiting for frontier lab equity to appreciate and then donating later. This perspective leads us to believe frontier lab shareholders who plan to donate some of their equity eventually should liquidate and donate a majority of that equity now.

We additionally think that frontier lab shareholders who are deliberating on whether to sell equity should consider:

  4. Reasons to diversify away from frontier labs, specifically.

For donors who are planning on liquidating equity, we would recommend they do not put liquidated equity into a donor-advised fund (DAF), unless they are confident they would only donate that money to 501(c)(3)s (i.e., tax-deductible nonprofits). Money put into a DAF can only ever be used to fund 501(c)(3)s, and there are many high-value interventions, in particular in the policy-influencing space, that cannot be pursued by 501(c)(3)s (e.g., 501(c)(3)s can only engage in a limited amount of lobbying). We think the value of certain non-501(c)(3) interventions far exceeds the value of what can be funded by 501(c)(3)s, even considering multipliers for 501(c)(3)s due to tax advantages.

We understand the obvious counterargument to our main claim – that frontier lab equity will likely see outsized returns in the run up to powerful AI, such that holding the equity may allow you to grow your donation budget and do more good in the future. Nevertheless, we believe our conclusions hold even if you expect frontier lab equity to grow substantially faster than the market as a whole. In part, we hold this view because donations enable organizations and activities to grow faster than they otherwise could have, which lets them raise and absorb more funding sooner, which lets them have more impact (especially if donation budgets soon balloon such that the bottleneck on valuable AI safety work shifts away from available funding and towards available funding opportunities). In this way, donations themselves are similar to held equity, in that they serve as investments with large, compounding growth.
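
To make this compounding comparison concrete, here is a minimal toy model; the growth rates, time horizon, and the later value-per-dollar discount are purely illustrative assumptions, not estimates:

```python
# Toy model (all numbers are illustrative assumptions): compare donating $1M
# now, where the funded work compounds in impact, against holding equity that
# appreciates faster and donating the proceeds later into a better-funded field.

def donate_now_impact(amount, impact_growth_rate, years):
    # Impact of an early donation compounds as the funded org/field grows.
    return amount * (1 + impact_growth_rate) ** years

def donate_later_impact(amount, equity_growth_rate, years, value_per_dollar_later):
    # Equity grows, but each dollar donated later buys less impact because
    # the space is more saturated by then.
    return amount * (1 + equity_growth_rate) ** years * value_per_dollar_later

YEARS = 5
now = donate_now_impact(1_000_000, impact_growth_rate=0.40, years=YEARS)
later = donate_later_impact(1_000_000, equity_growth_rate=0.50, years=YEARS,
                            value_per_dollar_later=0.4)

print(f"Impact-equivalent of donating now:   {now:,.0f}")   # ~5.4M
print(f"Impact-equivalent of donating later: {later:,.0f}")  # ~3.0M
# With these made-up numbers, donating now wins even though the equity
# grows faster than the donation's impact compounds.
```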

Below, we elaborate on our reasons for holding these views, which we outlined above.

1. There’s likely to be lots of AI safety money becoming available in 1–2 years

1a. The AI safety community is likely to spend far more in the future than it’s spending now

The AI safety community is likely worth tens of billions of dollars, but most of this is tied up in illiquid assets or otherwise invested to give in the future. Safety-oriented Anthropic shareholders alone are likely worth many billions of dollars (often with up to 80% of their worth earmarked for nonprofit donations), Dustin Moskovitz is also worth over ten billion dollars (with perhaps half of that to be spent on AI safety), and there are other pools of safety-minded donors investing to give in the future.

Yearly spending on AI safety, meanwhile, is in the hundreds of millions (spending on just technical AI safety research and related areas is ~$100M/yr, and total spending on AI safety writ large is presumably some multiple of this figure). This yearly spending likely represents a few percent of the community’s wealth. This is a somewhat modest spending rate for such an altruistic community, which we think largely reflects the fact that much of this wealth is tied up in illiquid assets. As some of these assets become more liquid (e.g., with Anthropic tender offers), the spending rate of the community will likely rise substantially.

Further, we think it is likely that frontier AI investments will do quite well in the coming years, ballooning wealth in the AI safety community and raising donation amounts with it. If Anthropic were to 5x again from its current valuation and its employees had access to liquidity, it’s conceivable that several of them could set up counterparts to Open Philanthropy.

1b. As AI becomes more powerful and AI safety concerns go more mainstream, other wealthy donors may become activated

We’ve already seen that as AI has become more powerful, more people have started paying attention to AI risk – including wealthy people. For instance, billionaire investor Paul Tudor Jones has recently begun talking publicly about catastrophic risks from AI. It’s plausible that some portion of people in this class will start donating substantial sums to AI safety efforts.

Additionally, the U.S. government (and other governments) may ramp up spending on AI safety as the technology progresses and governing bodies pay more attention to the issue.

2. Several high-impact donation opportunities are available now, while future high-value donation opportunities are likely to be saturated

2a. Anecdotally, the bar for funding at this point is pretty high

The sentiment from many grantmakers and people doing direct nonprofit AI safety work is that the current bar for funding is pretty high, and good opportunities are going unfunded. One way to get a sense of this is to browse Manifund for AI safety projects that are in search of funding. If you do this, you’ll likely notice some promising projects that aren’t getting funded, or are only getting partially funded, and many of these projects also have difficulty securing sufficient funding elsewhere. Another way to get a sense of this is to look at the recent funding round from the Survival and Flourishing Fund (SFF). Our impression is that many of the projects funded in this round are doing fantastic work, and our understanding is that some of them were highly uncertain about whether they would secure funding at all (from SFF or elsewhere).

We are also personally aware of organizations that we believe are doing great work, which members of the AI safety community are generally enthusiastic about, but which have been struggling to fundraise to the degree they want. If you want to check out some donation opportunities that we’d recommend, see the last section of this piece.

2b. Theoretically, diminishing returns within each time period mean donations are more valuable when collective donation amounts are lower

For the AI safety community as a whole, there are diminishing returns to donations within any given time period. The same holds for specific donation categories, such as technical AI research, AI governance research, and donations for influencing AI policy. This dynamic is a reason to spread donations out across time periods, as the alternative (concentrating donations within a single time period) forces donations into lower-impact interventions. It’s also an argument against holding onto equity to donate later if many other AI safety advocates will also be holding highly correlated (or even the same) equity to donate later – in the event that your investments grow, so would those of other donors, making the space flush and reducing the value per dollar substantially (meanwhile, if no one donated much in earlier time periods, the low-hanging fruit there may simply be gone).
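
As a stylized illustration of this dynamic, here is a sketch that assumes (purely for concreteness) square-root returns to total spending within a period and made-up figures for other donors' spending:

```python
import math

# Stylized illustration (square-root returns to total spending within a period
# and made-up figures for other donors' spending): with diminishing returns,
# dollars are worth more in periods where less is being spent overall.

def impact(total_spending_in_period):
    # Concave returns: each extra dollar in a well-funded period buys less.
    return math.sqrt(total_spending_in_period)

others_early, others_late = 10, 200  # other donors' (correlated) spending

def my_marginal_impact(my_early, my_late):
    return ((impact(others_early + my_early) - impact(others_early)) +
            (impact(others_late + my_late) - impact(others_late)))

print(my_marginal_impact(100, 0))   # donate everything early:  ~7.3
print(my_marginal_impact(50, 50))   # split across periods:     ~6.3
print(my_marginal_impact(0, 100))   # donate everything late:   ~3.2
# If other (correlated) donors are concentrated in the later period, your
# marginal dollar goes much further in the earlier one.
```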

2c. Efforts to influence AI policy are particularly underfunded

Spending by AI safety advocates to influence U.S. policy (through activities such as lobbying) is only in the ballpark of ~$10M per year. Enthusiasm for this area, meanwhile, has been rising rapidly – a couple of years ago, spending on it was basically zero. The amount spent on it could easily increase by an order of magnitude or more.

Insofar as you buy the argument that we should want to increase spending in earlier time periods where spending is by default lower, this argument should be particularly strong for interventions aimed at influencing policy. As an example of how this dynamic plays out in the AI policy space: the political landscape regarding AI is uncertain, and it isn’t clear when the best opportunities for passing legislation will arise. For instance, it’s possible that at any time there could be a mid-sized AI accident or other event which creates a policy window, and we want AI safety policy advocates to be ready to strike whenever that happens. Certain types of interventions can help better position the AI safety community to advance AI safety policies in such an event. Spreading donations across time periods helps ensure the best of these interventions are pursued in every period, increasing the chances that AI safety advocates will have a major seat at the table whenever such a policy window opens.

Notably, major figures in the AI industry recently announced intentions to spend >$100M on efforts to stave off AI regulations in the U.S. Until our side of these policy debates can muster a large response (say, spending a substantial fraction of that amount on influencing policy), we’ll likely be ceding a large portion of our seat at the table.

2d. As AI company valuations increase and AI becomes more politically salient, efforts to change the direction of AI policy will become more expensive

In worlds where your financial investments see huge returns, you probably won’t be alone in making large gains. Other interests will also likely see large returns, increasing the cost of changing the direction of society (such as via policy).

Even if you manage to beat the market by a huge factor, opponents of AI regulation may see gains similar to yours, increasing the total amount donated to affect AI policy (on both sides) and decreasing the value per dollar donated on the issue specifically. Notably, opponents of AI regulation include Big Tech firms (especially those with major exposure to AI in particular) as well as ideological accelerationists (who tend to have large exposure to both AI and crypto) – for your investment gains to give you a major advantage in influencing AI policy, you’d likely need gains substantially above those groups’.

Again, this is an argument for AI safety advocates as a whole to spread donations across time, not for ignoring future time periods. But it does cut against the argument that investing now can lead to much more money and thus much more impact, as the money would wind up being less valuable per dollar.

Further, AI policy is currently a relatively low salience issue to voters (i.e., approximately no voters are actually changing their votes based on stances politicians take on AI). At some point in the future, that’s likely to no longer be true. In particular, after an AI warning shot or large-scale AI-driven unemployment, AI policy may become incredibly high salience, where voters consistently change their vote based on the issue (e.g., like inflation or immigration are today, or perhaps even higher, such as the economy in the 2008 elections or anti-terrorism in the immediate aftermath of 9/11).

Once AI is a very high salience issue, electoral incentives for politicians may strongly push toward following public preferences. As public preference becomes a much larger factor in terms of how politicians act on the issue, other factors must (on average) become smaller. Therefore, donations to interventions to influence policy may become a relatively smaller factor.

Notably, money spent today may still be helpful in such situations. For instance, preexisting relationships with policymakers and past policy successes on the issue may be key for being seen as relevant experts in cases where the issue becomes higher salience and politicians are deciding where to turn to for policy specifics.

3. Donating now unlocks the ability to better use the huge amount of money that will likely become available later

3a. Earlier donations can act as a “lever” on later donations, because they lay the groundwork for high-value work at scale in the future

The impact of donations often accrues over time, just like equity in a fast-growing company. So even if the dollar value of the money donated now is lower than it would be in the future, the impact is often similar or greater, due to the compounding.

For instance, funding can unblock organizational growth. Successful organizations often grow on a literal exponential, so donating earlier may help them along the exponential faster. Further, donations aimed at field-building or talent development can help the pool of AI safety talent grow faster, likewise moving these areas along an exponential. And even interventions that aren’t explicitly geared toward talent cultivation can indirectly benefit talent development at the grant recipient, potentially increasing the number of mentors in the field.

In the AI policy space, where reputation and relationships are highly valuable, early donations can also act as a lever on later donations. It takes time to cultivate relationships with policymakers and to establish a reputation as a repeat player in the policy space, and successful policy interventions rarely emerge fully formed without prior groundwork. Furthermore, early legislative successes create foundations that can be expanded upon later. Many major government institutions that now operate at scale were initially created in much smaller forms. Social Security began modestly before expanding into the comprehensive program it is today. The 1957 Civil Rights Act, though limited in scope, established crucial precedents that enabled the far more sweeping Civil Rights Act of 1964 and Voting Rights Act of 1965. For AI safety, early successes like the establishment of CAISI (even if initially modest) create institutional footholds and foundations which can be expanded in the future. We want more such successes, even if their immediate effects seem minor.

If relationships and proto-policies are essential precursors to later, more substantial policies, then money spent on advancing policy now is not merely “consumption” but an “investment” – one which very well may outstrip the returns to investment in AI companies. If we don’t spend the money now, the opportunity to build these relationships and develop these early successes is lost forever.

4. Reasons to diversify away from frontier labs, specifically

4a. The AI safety community as a whole is highly concentrated in AI companies

The AI safety community has extremely concentrated exposure to frontier AI investments, creating significant collective risk. Outside of Dustin Moskovitz, most of the community’s wealth appears to be tied up in Anthropic and other AI institutions (both public and private). This concentration means the community’s collective financial health is heavily dependent on the performance of a small number of AI companies.

We think there is a strong case for being concentrated in the AI industry (both due to broader society under-appreciating the possible impact of AI, and due to mission hedging), but at the same time we suspect the community may currently have overdone it. From a community-wide perspective, moving some funds out of frontier AI investments increases valuable diversification.

And if AI investments dramatically increase in value, the community will be extremely wealthy. Given all the reasons above for diminishing returns to donations, each dollar donated would then be worth much less than a dollar donated today.

4b. Liquidity and option value advantages of public markets over private stock

Selling private AI stock in exchange for liquid assets creates substantially more option value, even for those who wish to remain leveraged on AI. Private stock holdings, while potentially valuable, lack the flexibility and liquidity of public market instruments, and that liquidity is very valuable for being able to deploy assets quickly if strong opportunities arise (whether for donations or personal use).

It’s entirely possible to maintain high leverage on AI performance using public markets. For example, investing in companies like Nvidia and Google can maintain large AI exposure while increasing liquidity. Note that most financial advisors would advise against keeping much of your personal wealth invested in a single industry, and keeping it invested in a single company is riskier still. Admittedly, public AI stocks like the hyperscalers would give your portfolio less concentrated AI exposure than frontier lab equity. Of course, talk to your financial advisor before acting on this information.

For those uncertain about whether they’ll want to donate early or later, selling private stock when there’s an opportunity to sell and thereby creating liquidity provides significantly more option value, even while remaining substantially leveraged on AI outcomes. As long as a donor places substantial probability on deciding to donate more in the near term, there are large gains to be had from liquidity and from moving from private to public markets.

4c. Large frontier AI returns correlate with short timelines

In worlds where frontier lab returns are particularly high, timelines are likely short. In that case, the community will probably regret not having spent more sooner on interventions. On the other hand, if timelines are longer, holding frontier lab stock likely won’t be as good an investment anyway.

4d. A lack of asset diversification is personally risky

From an individual perspective, the marginal utility of additional wealth decreases substantially as total wealth increases. For many safety-minded individuals with significant exposure to AI companies, reducing concentration risk may be personally optimal even if it means lower expected absolute returns.

This creates a case for some diversification even if it costs funds in expectation. Being better off personally with less variance, even if absolute returns are lower, can be the rational choice when facing such concentrated exposure to a single sector or company. The psychological and financial benefits of reduced variance may outweigh the potential opportunity cost of somewhat lower returns, particularly when those returns are already likely to be substantial.
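
As a stylized sketch of this point, the following uses log utility and entirely made-up outcome probabilities and return multipliers:

```python
import math

# Stylized sketch (log utility; outcome probabilities and multipliers are
# entirely made up): a concentrated position with a higher expected value can
# still have lower expected utility than a diversified one, because the
# marginal utility of additional wealth falls as wealth grows.

WEALTH = 10.0  # current wealth in $M (hypothetical)

# (probability, multiplier on wealth) pairs
concentrated = [(0.5, 5.0), (0.5, 0.2)]  # big upside, big downside
diversified = [(0.5, 1.8), (0.5, 0.9)]   # lower mean, much lower variance

def expected_value(outcomes):
    return sum(p * m * WEALTH for p, m in outcomes)

def expected_log_utility(outcomes):
    return sum(p * math.log(m * WEALTH) for p, m in outcomes)

for name, outcomes in [("concentrated", concentrated), ("diversified", diversified)]:
    print(f"{name:12s}  E[wealth] = {expected_value(outcomes):5.1f}  "
          f"E[log utility] = {expected_log_utility(outcomes):.3f}")
# Concentrated: E[wealth] = 26.0, E[log utility] ~ 2.30
# Diversified:  E[wealth] = 13.5, E[log utility] ~ 2.54
```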

Conclusion

While there is an argument for investing donation money to give more later, there are several counterarguments that favor donating now. Donors with frontier lab stock should carefully weigh these factors against expected investment returns, retaining frontier lab stock to give later only if they believe their expected returns are strong enough to outweigh the arguments here. We would also advise donors to think at a community-wide level – even if you think the community as a whole should retain substantial frontier lab equity, if the community is overinvested in frontier labs then you may think individual donors should liquidate to a substantial degree to move the community closer to the optimal exposure to frontier AI.

Some specific donation opportunities

If you’re swayed by the logic in this piece and you want to give, we think the following donation opportunities are among the best places in the world to donate for advancing AI safety. All can also absorb more funds effectively. Notably, the largest philanthropic donor in the AI safety space by far is Dustin Moskovitz (primarily via his foundation Good Ventures acting on the recommendations of his donor advisor organization Open Philanthropy), and all the following opportunities either aren’t being funded by Dustin Moskovitz or are limiting the amount of funding they accept from Dustin. We therefore consider all the following opportunities to be at the intersection of impactful and neglected, where your donation can go a long way.

501(c)(3) opportunities (i.e., tax-deductible nonprofit donations):

  • METR: METR researches and runs evaluations of frontier AI systems, with a major focus on AI agents. If you work at a frontier lab, you’re probably aware of them, as they’ve partnered with OpenAI and Anthropic to pilot pre-deployment evaluation procedures. METR is particularly known for its work on measuring AI systems’ ability to complete increasingly long tasks. METR is also currently fundraising. (Note COI: Ryan Greenblatt has a personal relationship with someone at METR.)

  • Horizon Institute for Public Service: Horizon runs a fellowship program where they place fellows with technical subject-matter expertise in federal agencies, congressional offices, and think tanks, with a particular focus on AI. Their theory of change is to help address the US government’s technical talent shortage, with the hope that increasing policymakers’ technical understanding will lead to better tech policy. We believe that more informed tech policy will reduce risks from increasingly powerful AI systems.

  • Forethought: Forethought conducts academic-style research on how best to navigate the transition to a world with superintelligent AI. Much of their research focuses on concerns around AI-enabled coups and what may happen in an intelligence explosion. For many of these questions, which may become crucial as AI advances, Forethought is one of the only research groups in the world seriously researching the topic. (Note COI: Ryan Greenblatt is on the board of Forethought.)

  • Long-Term Future Fund: The LTFF is a grantmaking fund that aims to positively influence the long-term trajectory of civilization. In practice, the vast majority of the grants the LTFF makes are for AI safety. Many of the grants the LTFF makes are to smaller projects that would otherwise have difficulty securing funding despite being promising. (Note COI: Daniel Eth is a Fund Manager on the LTFF.)

If you’re open to donating to entities besides 501(c)(3) nonprofits, there are also donation opportunities for influencing AI policy to advance AI safety which we think are substantially more effective than even the best 501(c)(3) donation opportunities, even considering foregone multipliers on 501(c)(3) donations (e.g., from tax benefits). If you’re interested in these sorts of opportunities, you should contact Jay Shooster (jayshooster@gmail.com) who is a donor advisor and can advise on policy-influencing opportunities.