Manifund is launching a new animal welfare fund, led by regrantor Marcus Abramovitch. We make rapid (<1 week), early-stage ($25k–$150k) grants across animal welfare, with a particular interest in the intersection of animals and transformative AI.
Reach out to marcus.s.abramovitch@gmail.com if you’d like to donate!
Why AI x animals?
Many EAs take seriously both the welfare of animals, and the possibility of short AI timelines. But EA funders currently consider these in isolation. AI safety grants mostly ignore potential outcomes for non-human beings. And animal welfare grants assume business-as-usual, that our world in 10 years mostly looks like the world today.
We don’t expect this to be the case. One major goal of the fund will be to identify and create opportunities so that transformative AI secures good outcomes for animals. Some example projects we’d like to fund:
Animal harm benchmarks. There are only a handful of animal harm benchmarks, none of which adopted by frontier labs. Other benchmarks that are well known and used (SWE bench, FrontierMath) came about through rising to the top of a marketplace of benchmarks. The same should happen with animal welfare benchmarks. Many benchmarks should be created, some by established ML engineers with the goal that one or two get traction to “hill-climb” on.
Animal Welfare Constitutions: Recently, Claude’s constitution was published with a value of “Welfare of animals and of all sentient beings” when determining how to respond to a prompt. This is one line of an 84 page document from one frontier lab. There should be ready made versions of texts of various lengths for constitutions, system cards, etc. to improve model behaviours and considerations for animals.
Watchdog organization: As AI begins to take effect across industries, there is a good chance the factory farming industry and others will start to use AI in ways beyond Precision Livestock Farming that will be important to get out ahead of. Keeping an eye on industry practices as well as effects on wild animals will be important to identify high-leverage, urgent interventions
Animal welfare salience in AI labs: Assuming AI systems are going to have profound effects on the world, it is important for those shaping the technology to be aware of and care about issues related to animal welfare as they are developing a technology with potentially large lock-in effects
(We also expect to place some bets on non-AI opportunities that are unusually strong.)
Why rapid?
One of the top complaints among grantees is the glacial pace of funding decisions. To a founder deciding to leave their job or making their first hire, a quick response can be make-or-break. In other domains, Tyler Cowen’s Fast Grants and Jueyan Zhang’s AISTOF show that multi-month-long reviews don’t have to be the default. In the for profit world, VCs similarly make decisions incredibly quickly.
By having one directly responsible individual for this fund, we eschew the overheads in typical grantmaking. As a Manifund regrantor on AI safety, Marcus has turned around funding decisions <1 week; Manifund is able to wire funds in <3 days after that. We’re bringing this speed to the animal welfare space to serve early-stage orgs.
Why Marcus?
This fund represents a bet on Marcus’s taste and execution. He’s already funded many successful early-stage projects, and is fluent in both animal welfare as well as AI/AI safety issues.
Marcus has been a hardcore earn-to-give EA. He’s personally donated ~$1.5m, representing >60% of his lifetime earnings, primarily to animal welfare. He earned this money through poker, cryptocurrency/quant trading, prediction markets, and advising a family office. (He was, for a time until he quit, the #1 trader on Manifold by all-time profit.)
Animal track record. Marcus has been an early backer of many projects that are now considered standout animal welfare charities, including:
Shrimp Welfare Project — electrical stunner placements now spare ~3.3 billion shrimp/year
Society for the Protection of Insects — state-level bans on insect factory farming
Compassion Aligned Machine Learning — animal-welfare evals for frontier AI
AI safety regranting record. This highlights Marcus’s eye for talent and understanding of frontier AI development. From a $100k Manifund regranting budget in 2023, Marcus funded:
Marius Hobbhahn, then starting Apollo Research
Jesse Hoogland, then starting Timaeus
Joseph Bloom, who went on to lead Whitebox Interp at UK AISI
Lisa Thiergart, who went on to lead MIRI’s technical governance team
Marcus also nudged his friend Ege Erdil to start Mechanize, and offered them their first investment.
Compared to other funders
We’re fans of the EA Animal Welfare Fund, the Navigation Fund, CG Farmed Animal Welfare and others in this space. We’re starting this fund as an alternative, for several reasons:
First, AI x animals. Others don’t currently prioritize interventions that focus on a transformative AI world. We’re much more AI-pilled and expect there’s a lot of low-hanging fruit for this reason. The AI x Animals RFP and SFF’s 2026 round seem good, but neither are currently fundraising.
Second, speed of deployment. We think that there is a need for much faster deployment of funds given our timelines for transformative AI. Especially when it comes to piloting new projects and starting new orgs, we need to move as fast as the AI landscape is moving to support effective interventions.
Third, transparency. As with other grants on Manifund, every grant and rationale by this fund will be made in public on our site, in real time. Donors and grantees will be able to evaluate our decisions for themselves. We think this is good for the ecosystem as a public benefit to build trust, share information and give potential donors a much better insight to what we are doing.
Fourth, active grantmaking. Marcus plans on reaching out to promising individuals rather than primarily taking inbound applications. He has a wide network to draw upon, across the animal welfare, AI, and AI safety ecosystems.
How to donate
Reach out to marcus.s.abramovitch@gmail.com if you’d like to donate, or book a call here.
We’re targeting an initial $2m raise by May 15. Marcus is taking no salary; Manifund runs ops and fiscal sponsorship with a 5% overhead.
Manifund is a 501c3 registered charity (officially “Manifold for Charity Inc.”, EIN 88-3668801; we can accept donations through DAFs, direct wire/bank transfer, crypto, and credit card.
I’m not sure why this is under the “AI Safety regranting record” section since Mechanize focuses on capabilities research and has been skeptical of safety efforts. Take for example their section from a early post:
I wouldn’t say I “nudged” him. He was doing it. I invested since I thought it was a good investment (it has been). They had no problem raising money, and my investment replaced part of one of the other investors’ cheques.
I wouldn’t have included this, especially since it’s a private investment, but Austin really wanted to.
I have donated a lot of money recently to animal welfare (~$450k in the last 5 months). I would have donated less if I had not had this investment.
Mechanize sells environments to AI labs (this is where all revenue comes from) and so if you think investing in the labs is ok, so should investing in Mechanize.
I included this story as a short anecdote about Marcus’s ability to spot talent, make active investments, and convince founders to take the leap, all of which I expect to transfer into helping start great AI x Animal orgs. I understand that different people in EA/AI safety have different takes about whether Mechanize specifically is good or bad—I happen to think good or at least neutral.
(And I take responsibility for any factual errors with this specific anecdote. Talking to Marcus just now, it seems like his main nudge was to convince Ege/Matthew/Tamay that the nonprofit structure was wrong for what they wanted to accomplish.)
Thanks for clarifying this.
I personally don’t endorse investing in labs in order to later donate to AI Safety/Animal Welfare causes, for reasons similar to those discussed by Wei Dai here, and have turned down opportunities to do so. But reasonable people disagree on this topic.
Some things I believe:
It’s good that this fund exists.
More people should be thinking about the impact of AI on sentient beings.
Model constitutions are unlikely to be relevant to ASI.
More generally, pretty much everything AI x Animals people are currently working on (AFAIK) is unlikely to be relevant to ASI.
There are a lot of people on LessWrong who have a better picture of what might be relevant to ASI, and I’d like to see comments from them on what sort of direction they’d want to see for Falcon Fund or for orgs in the space.
My current top things I am seeking to fund in AI x Animals work.
I want a better benchmark for animal harms, made with input from lab employees. I gave CAML some funding earlier. I think it was okay as a first pass, but nowhere near good enough. I think this will be expensive to create, but good.
Sentience Charters/Constitution/Lobby the labs to put things in system cards, constitutions, make classifiers for prompts that involve animals, etc.
Humane tech: Welfare tech should be made by welfare people. We should be actively involved in industry, shaping methods, practices and tech (stunners, ovo-sexing, genetics, etc.) should be made by us so we make good decisions.
Insects/Neglected Species work.
This is something I’ve written about before (e.g. Which types of AI alignment research are most likely to be good for all sentient beings?) but there are LessWrong regulars who could provide much better insight than I can.