An Opinionated Review of Many AI Safety Funders (with recommendations)[1]
Manifund: Somewhat mixed. Nice interface, love the team, but many of the best regranters seem inactive, some of the regranter choices are very questionable, and the way it’s set up doesn’t make clear to applicants that a person with a budget might not evaluate them, or give stats on how hard it is to get funded there. The quality of the regranter choices seems extremely variable. You’ve got some with imo exceptional technical models, like Larsen, Zvi, and Richard Ngo, but they don’t seem super active in terms of number of donations. A bunch of very reasonable picks, some pretty active, like Ryan Kidd. And some honestly very confusing / bad picks, like the guy who quit Epoch to start a capabilities company which, if it succeeds, will clearly be bad for the world (the pitch is literally “may as well automate the economy, it’s gonna happen anyway, if you can’t beat ‘em join ’em”, while totally failing to grasp, or possibly not caring, that this leads to humanity having a very very bad time), and who has a lot of very bad takes (in my opinion and the opinion of a lot of people who seem to be thinking clearly). Or Leopold Aschenbrenner, who’s high profile, but who I’ve seen many people comment is likely making the world a lot worse by building an unhelpful kind of hype and pushing the world towards race dynamics, and who has now pivoted to running an AGI investment fund.
Anyway yeah, rec: it feels like the selection process is too focused on “high profile” and not enough on gears-level models of why a given person is actually good for x-risk. Also some failure to keep the really good people engaged? Maybe you could give regranters a way to get a digest emailed to them of things relevant to their interests? Like they input a prompt and it acts as a filter, giving them a few things to check weekly, rather than the current system where you need to reach out directly to them to get them to evaluate anything.
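To make that concrete, here’s a minimal sketch of the kind of thing I mean (the names and the scoring step are hypothetical placeholders; in practice the scoring would presumably be a single LLM call on Manifund’s side, this is not a proposal for specific infrastructure):

```python
from dataclasses import dataclass

@dataclass
class Project:
    title: str
    description: str
    url: str

def score_relevance(interest_prompt: str, project: Project) -> float:
    """Hypothetical placeholder: a single LLM call scoring 0-1 how well
    this project matches the regranter's stated interests."""
    raise NotImplementedError

def weekly_digest(interest_prompt: str, new_projects: list[Project],
                  max_items: int = 5, min_score: float = 0.6) -> list[Project]:
    """Pick the handful of new applications worth this regranter's attention."""
    scored = [(score_relevance(interest_prompt, p), p) for p in new_projects]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored if score >= min_score][:max_items]
```

The point is just that a regranter writes one interest paragraph once and then gets a short weekly email, rather than needing to be chased individually.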
OpenPhil: On the bright side, really nice transparency along with an amazing volume of funds, which does boost some great people, especially in meta/fieldbuilding, and I’ve seen some hits among the career transition grants. They also have far more money than anyone else, but overall it seems relatively inefficiently spent. Lots of emphasis on evals and interp and academics who are doing work which can’t possibly scale to strong superintelligence, and who often don’t seem to have theories of change which aim at win conditions (in the original Aaron Swartz sense of working back from a win condition and figuring out how to get there, as opposed to theories of action, which are locally reasonable-looking next steps that seem like they might go towards your goal). Over-reliant on legible signals of worthiness, missing the inside view and the willingness to pick weird but actually highly impactful stuff, and lots of people on the ground are very confused and concerned by their funding priorities. I briefly reviewed all their grants over the past ~5 years about a year ago; it used to be a lot better before Holden left. Whatever happened that led him to leave, I wish it hadn’t happened, and I hope Dustin decides to do whatever it takes to get him back. I think it still probably does more good than harm (unlike Habryka’s understandable take), but very mixed and definitely huge room for improvement.
Main rec is to fund actual alignment work (as stated clearly by Rohin Shah), and hire some of the very rare people who can evaluate conceptual research on technical merit rather than general signals like credentials. A bonus would be to publish lists of things that OpenPhil grantmakers think would be good EV to fund but don’t, because Dustin doesn’t want to fund them (e.g. in domains he’s vetoed, like governance orgs willing to work with the right, or rationalist community building). It’s very fair for him not to want to fund things he doesn’t want to defend, but making it clear and public what his grantmakers think is actually good, if they’re optimizing for good futures, would match the values the org was set up with and help other funders know where to step up.
SFF (& Lightspeed): Great job overall. Funds the majority of the really useful-looking stuff imo, and the s-process is a super neat way to deploy some of the world’s best cognition efficiently. Only major rec: handle individual funding better. The s-process by default is more org-shaped, but a lot of the highest impact-per-dollar opportunities are smaller scale, and SFF isn’t set up super cleanly to deal with this. Having a lower-overhead way to give out small amounts of money to individuals would be a big boost here, but might need some mechanism design to handle it well. I’d suggest using the EigenTrust system @the gears to ascension is working on to make this work; I can write it up properly if @jaan is even slightly interested.
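(For readers who haven’t run into EigenTrust: it computes global trust scores by propagating peer-to-peer trust ratings through a normalized trust matrix, anchored to a set of pre-trusted participants. Here’s a minimal sketch of the core iteration, just to show the flavour of the machinery; this is not the actual system being built:)

```python
import numpy as np

def eigentrust(local_trust: np.ndarray, pre_trusted: np.ndarray,
               damping: float = 0.15, iters: int = 100) -> np.ndarray:
    """EigenTrust-style global trust scores.

    local_trust[i, j]: how much participant i trusts participant j (>= 0).
    pre_trusted: probability distribution over seed participants whose
    judgement the funder already trusts.
    """
    local_trust = np.asarray(local_trust, dtype=float)
    pre_trusted = np.asarray(pre_trusted, dtype=float)
    # Row-normalize so each participant's outgoing trust sums to 1.
    row_sums = local_trust.sum(axis=1, keepdims=True)
    C = np.divide(local_trust, row_sums,
                  out=np.zeros_like(local_trust), where=row_sums > 0)
    t = pre_trusted.copy()
    for _ in range(iters):
        # Propagate trust through the graph, mixing the seed distribution
        # back in so scores stay anchored to people the funder trusts.
        t = (1 - damping) * C.T @ t + damping * pre_trusted
    return t
```

Small individual grants could then be weighted by scores like these rather than requiring a full recommender evaluation per application, though the mechanism-design details are exactly the hard part.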
(Talking to other people, I did hear some complaints about delayed round announcements; this hasn’t been a huge deal on projects I’ve fundraised with, because I just assumed SFF would come through sometime and never particularly watched dates.)
AISTOF: Good job too, I especially like the high speed/responsiveness/agency. The downside is the focus on emergency bridge funding rather than ongoing support, but it fills a real and important niche. This is a really nice example of “rich person empowers one high-agency person to do good things”. I’d suggest more HNWIs do this kind of thing.
LTFF: Nice taste, and it was for a long time by far the best fund focused on individuals, but historically poor responsiveness and... something like process? Good grantmakers, but needs more focus/structure/innovation/cohesiveness.
Actually, scratch that, this used to be their problem. Their new problem looks to be more like “they’re not giving away much money, probably because they don’t have much money because OpenPhil defunded them (related to issues with OpenPhil)”
Might be partly due to other issues there? But my bet is that most of this is a defunding issue. This is especially bad because they have a ton of surface area with high-impact applications, so lots of the best people will be getting let down after putting work into applying.
Recs: someone very wealthy please donate, our LTFF is dying[2]
(LTFF staff please correct me if I’m wrong and you have money but aren’t giving it out for other reasons I’m unaware of)
The Alignment Project: I want to love this! It seems maybe great! But when I dig into the research priorities, they feel like they’re mostly not trying to tackle the hard part of alignment (which is fair, it’s hard, but it would be good to at least try and capture fundamental deconfusion work and ambitious alignment explorations), and in a few places they likely slip into basically capabilities and/or fun nerd things to explore. I haven’t actually seen many of their grants, and I put decent odds on them actually being one of the best funders around, but I’m not super sure here.
Recs: Public grants database, adding focus area for deconfusion/agent foundations, adding focus area for something like “using automated researchers to solve strong/non-prosaic alignment in domains where you don’t get straightforward verification”.
Longview: Really low transparency. Idk what they’re doing, there’s no application process, and the few examples of grants they flag are broad and vaguely reasonable but don’t show exceptional taste. Their public info about the cause area is super general and based on broad surveys rather than showing the gears models of the grantmakers themselves, other than flagging “malicious or reckless actor” threats without flagging AI takeover, which is concerning as a sign about taste. It’s possible they’re doing good things, but people on the outside can’t tell, and the few signs that are visible are not hope-generating for clear, well-grounded models which can evaluate theories of change in a complex domain.
Rec: Please make a database of grants made based on your advice so people can have a sense of what you’re actually doing and know whether to encourage the HNWIs you advise towards you. Also being legible about your process for finding and evaluating leads would be really good.
ARIA: Don’t have super good context here, but seems to be doing a semi-narrow thing reasonably well?
Founders Pledge: A few well-chosen governance things, but not that many of them considering the scale of funds that I hear they have.
Rec: add staff to review technical work and scale up the efforts, if they want to really move the needle on AI x-risk. Or update the grants database if they are still making nice grants.
Schmidt Sciences: Some promising words (e.g. “Advance safety approaches resistant to obsolescence from fast-evolving technology”). I hadn’t looked into them much until this post, but it looks... kinda like it’s aiming to get us further into the WFLL (“What Failure Looks Like”) scenario. My guess is they’re kinda limiting themselves to academics with good credentials, which doesn’t have great overlap with people who deeply get the strategic/technical picture in a way which lets them orient to a landscape where you have to do better than science, because you can’t empirically test whether an AI design would kill you if it correctly noticed it was strong enough to take over. Still, there are some people whose project descriptions make me guess they’re net positive.
Rec: Re-evaluate the exclusive focus on academics after building clear models of how to get to a win condition, or hire people with that clarity.
AI Safety Fund: The team has some nice ideas, but the org hasn’t got funding yet. TBD.
Delta Fund: afaik a similar deal to the above, though it’s possible JJ has actually found funding, as he’s well connected and has been thinking about this for years.
BlueDot: They have funding ($1m)! But they’re too new to see what their taste in grants is like. Plausibly hopeful list of interests.
Appendix: Why is AI Safety grantmaking hard?
Unlike VC or even most charity work, which have feedback loops where you can see how you’re doing, the feedback loops around trying to steer the singularity have a ground truth answer (does the singularity go well?) which is not available until it’s too late. This is really rough. The best we have is to ask people who have really good strategic and technical models of the world what they think, and those people are in uncomfortably short supply, plus their time is very valuable (as they’re the people who can often do the most good).
Also, if any donor wants advice and is serious about donating non-trivial amounts, I’m always happy to talk and give you a menu of current, high expected-nanodooms-averted-per-dollar opportunities. Similarly, if anyone from any of these funders who is in a position to improve things wants to get on a call and get more details of my read on what they could do better, I’d be enthusiastic to give more info.
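(If the unit is unfamiliar: a nanodoom here is a one-in-a-billion reduction in the probability of existential catastrophe, so the metric is just a cost-effectiveness ratio. A toy calculation with entirely made-up numbers:)

```python
# Entirely illustrative numbers, not a claim about any real grant:
grant_cost_usd = 100_000        # size of the grant
p_doom_reduction = 1e-6         # judged absolute risk reduction (1 micro-doom)
nanodooms_averted = p_doom_reduction / 1e-9    # = 1,000 nanodooms
print(nanodooms_averted / grant_cost_usd)      # 0.01 nanodooms averted per dollar
```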
“who are you to write this”
I maintained the list of funders for several years, have been involved in fundraising for quite a few orgs I volunteer for/advise (total of ~$2m), have advised many people in the sphere, have given out small scale grants a fair few times from personal funds and learned a lot from that, thought about grantmaking in general a bunch for the better part of a decade, etc.
I mostly only wrote about ones I have takes about or recent info on, other than very large ones. There are others, see the list of funders. Why write this? I think common knowledge is broadly good, I want lots of the funders to up their game, I would like everyone to not die and in fact get a nice future.
CoI: I’m not dependent on funding from any of these sources and have not received a salary from any of the grant applications I’ve helped with. However, projects I’ve helped with have received funds, and some of my friends are/have been supported by some of these funders (including some I’m critical of).
(Also, if you do get funds, maybe get a really good full-time ops/systems person or something? Chris Lons, who ran AI Safety Quest, would be great; he’s interested in improving the funding landscape and is looking for work.)
Thanks for writing this! Just booked a time in your calendly to discuss at more length.
curious for more details from the people who disagree-reacted, since there’s a lot going on here.
I’m confused—I don’t see any? I certainly have some details of arguable value though.
it was at negative agreement earlier
Thanks for the review! Speaking on Manifund:
Is Manifund overpromising in some way, or is it just that other funders like OP/SFF don’t show you the prospective/unfunded applications? My sense is the bar on getting significant funding on Manifund is not that different than the bar for other AIS funders, with some jaggedness depending on your style of project. I’d argue the homepage sorted by new/closing soon actually does quite a good job of showing what gets funded and the relative difficulty of doing so.
I do agree that our regrantors are less active than I’d like; historically, many of the regrantor grants go out in the last months of the year as the program comes to an end.
On matters of regrantor selection, I do disagree with your sense of taste on eg Leopold and Tamay; it is the case that Manifund is less doom-y/pause-y than you and some other LW-ers are. (But fwiw we’re pretty pluralistic; eg, we helped PauseAI with fiscal sponsorship through last year’s SFF round.) Furthermore, I’d challenge you to evaluate the regrantors by their grants rather than their vibe; I think the one grant Leopold made was pretty good by many lights; and Tamay hasn’t made a grant yet.
We are also open to bringing on other regrantors, and have some budget for this—if you have candidates who you think would do a better job, please do suggest them!
Thanks for engaging!
Is Manifund overpromising? Kinda, imo, though not too badly. For other funds, you’re going to get evaluated by someone who has the ability to fund you. On Manifund, there’s a good chance that none of the people with budgets will evaluate your application, I think? Looking at the page you mention, it’s ~2/3 unfunded (which is better than I was tracking, so some update on that criticism), even considering that about half of the funded ones are only partly funded (and a fair few for just a trivial amount). I think if you scroll around you can get a sense of this, but it’s not highlighted or easily available as a statistic (I just had to eyeball it, and that only covers open applications rather than the more informative closed stats). Probably putting stats somewhere on what % of their requested funding projects tend to get, plus saying something on the make-application page about how applications get put in front of people with budgets, fixes this?
Right... it seems suboptimal to have a spike in the probability of getting funded that’s invisible to applicants? What do you think of the idea of a “request a custom LLM-selected newsletter” feature, where regrantors write a paragraph about the kinds of things they’d like to hear about when they sign up?
This could get pretty involved re Leopold, as it gets a bit into strategic considerations which probably fit best in an interactive setting[1], but Tamay not having given out funds yet is not all that reassuring, given his terrifyingly bad takes on strategy and actions to match. Maybe read through this https://www.mechanize.work/blog/technological-determinism/ and model what AI general enough to automate the entire economy does with humans afterwards, given anything like current levels of civilizational competence at alignment and governance.
Edit: to show this is a widely held view, note that one of the most highly rated shortforms recently was this https://www.lesswrong.com/posts/DFX9MzRjsnvRFLTvt/jan_kulveit-s-shortform?commentId=vsu3RzANmkPwDyw7Z which calls them out very harshly, with agreement from commenters.
Yeah! Here are some of the people who first spring to mind as having a strong grasp of the biggest challenges around alignment, who afaik aren’t purely focused on their own agenda, who I’d expect to grant to things I’m at all hopeful about, and who aren’t already well placed to direct abundant funding:
@Steven Byrnes, Sam Eisenstat, @Vivek Hebbar, @Ramana Kumar, @johnswentworth @Richard Ngo, @Vladimir_Nesov, @steven0461.
Happy to have a call if that sounds fun to you!
I’m also not super impressed with his grants; they don’t seem awful, but they’re not particularly high impact per dollar for some pretty large grants to already well-funded people. I’d be curious to see a retrospective on the $400k grant he gave two years ago and see how much came of that.
Also, I updated my description above to prioritise better, given the updates about funding stats from looking more closely recently.
Do you say this because of the overhead of filling out the application?
I’m interested in hearing about it. Doesn’t have to be that polished, just enough to get the idea.
(For context, I work on the S-Process)
Awesome! I’ve got a pretty full couple of days, but should have a sketch by sometime on the weekend.
And I say it because of a mix of filling in the application (which is heavy duty in a few ways that kinda make sense for orgs, but not really for an individual), and the way s-process evaluations don’t neatly fit checking dozens of additional applications which require lots of technical reading to assess. You kinda want a way to scalably use existing takes on research value from people you trust somewhat but who aren’t full recommenders, like many other funders use, rather than assessment via recommenders whose time is scarce.
(You have much more visibility into the s-process than me; I’ve been keen to get a better sense of it for a couple of years, and if there are sharable docs/screenshots of the software I’d be happy to become better informed and less likely to have my suggestions miss.)
This makes sense, though it’s certainly possible to get funded as an individual. Based on my quick count there were ~four individuals funded this round.
Speculation grants basically match this description. One possible difference is that there’s an incentive for speculation granters to predict what recommenders in the round will favor (though they make speculation grants without knowing who’s going to participate as a recommender). I’m curious for your take.
It’s hard to get a good sense without seeing it populated with data, but I can’t share real data (and I haven’t yet created good fake data). I’ll try my best to give an overview though.
Recommenders start by inputting two pieces of data: (a) how interested they are in investigating each proposal, and (b) disclosures of any possible conflicts of interest, so that other recommenders can vote on whether they should be recused or not.
They spend most of the round using this interface, where they can input marginal value function curves for the different orgs. They can also click on an org to see info about it (all of the info from the application form, which in my example is empty) and notes (both their notes and other recommenders’).
The MVF graph says how much they believe any given dollar is worth. We force curves to be non-increasing, so marginal value never goes up. On my graph you can see the shaded area visualizing how much money is allocated to the different proposals as we sweep down the graph from the highest marginal value.
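To make the sweep concrete, here’s a rough sketch (not our actual code, and simplified to a single recommender with piecewise-constant curves) of how allocation falls out of non-increasing marginal value curves:

```python
def allocate(budget: float, curves: dict[str, list[tuple[float, float]]],
             step: float = 1_000.0) -> dict[str, float]:
    """Greedy allocation from non-increasing marginal value curves.

    curves[org] is a list of (width_in_dollars, value_per_dollar) segments,
    e.g. [(100_000, 5.0), (200_000, 1.5)] means the first $100k is valued at
    5 units/$ and the next $200k at 1.5 units/$.  Because curves never
    increase, repeatedly funding whichever org currently has the highest
    marginal value is the same as sweeping down the graph from the top.
    """
    allocated = {org: 0.0 for org in curves}

    def marginal_value(org: str) -> float:
        spent = allocated[org]
        for width, value in curves[org]:
            if spent < width:
                return value
            spent -= width
        return 0.0  # past the end of the curve: extra money is worth nothing

    while budget >= step:
        best = max(curves, key=marginal_value)
        if marginal_value(best) <= 0:
            break  # nothing left with positive marginal value
        allocated[best] += step
        budget -= step
    return allocated
```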
There are also various comparison tools, the most popular of which is the Sankey chart which shows how much money flows from the funder, through the different recommenders, to different orgs. The disagreements matrix is one of the more useful tools: it shows areas where recommenders disagree the most, which helps them figure out what to talk about in their meetings.
If you’re interested in the algorithm more than the app, I have a draft attempting to explain it.
Speculation grants were a great addition! However, applying for a speculation grant still commits the S-process to doing a full evaluation, along with the heavy application process on the user side. I think this can be streamlined a fair amount without losing quality of evaluation; draft of a proposal started :)
Thanks for all the extra info on the s-process, this helps clarify my thinking!
I’m a grantmaker at Longview. I agree there isn’t great public evidence that we’re doing useful work. I’d be happy to share a lot more information about our work with people who are strongly considering donating >$100K to AI safety or closely advising people who might do that.
Thanks for engaging! What are the disadvantages of having information about your recommendations generally available? There might be some which are sensitive, but most will be harmless, and having more eyes seems beneficial, both from getting more cognition to help notice things and, more importantly, for people who might end up getting to advise donors but aren’t yet.
My guess is a fair few people have forwarded HNWIs to you without having much of a read on the object-level grantmaking suggestions you tend to give (and some of those people would not have known in advance that they would get to advise the individual, so your current policy couldn’t help), and that feels... unhealthy, for something like epistemic-virtue reasons.
Also, I bet people would be able to give higher quality (and therefore more, and more successful) recommendations to HNWIs to talk with you if they had grounded evidence of what grants you suggest.