Hey there~ I’m Austin, currently building https://manifund.org. Always happy to meet LessWrong people; reach out at akrolsmir@gmail.com!
Thanks for the review! Speaking for Manifund:
> The way it’s set up doesn’t make clear to applicants just how hard it is to get funded there
Is Manifund overpromising in some way, or is it just that other funders like OP/SFF don’t show you the prospective/unfunded applications? My sense is that the bar for getting significant funding on Manifund is not that different from the bar at other AIS funders, with some jaggedness depending on your style of project. I’d argue the homepage, sorted by new/closing soon, actually does quite a good job of showing what gets funded and the relative difficulty of doing so.
> Many of the best regranters seem inactive, and some of the regranter choices are very questionable.
I do agree that our regrantors are less active than I’d like; historically, many of the regrantor grants have gone out in the last months of the year, as the program comes to an end.
On the matter of regrantor selection, I do disagree with your sense of taste on eg Leopold and Tamay; Manifund is indeed less doom-y/pause-y than you and some other LW-ers are. (But fwiw we’re pretty pluralistic; eg, we helped PauseAI with fiscal sponsorship through last year’s SFF round.) Furthermore, I’d challenge you to evaluate the regrantors by their grants rather than their vibes: I think the one grant Leopold made was pretty good by many lights, and Tamay hasn’t made a grant yet.
We are also open to bringing on other regrantors, and have some budget for this—if you have candidates who you think would do a better job, please do suggest them!
This looks awesome, congrats on announcing this! I would be extremely tempted myself were it not for a bunch of other likely obligations. Approximately how large do you expect this fellowship to be?
Also, structuring Inkhaven as a paid program was interesting; most fellowships (eg Asterisk, FLF, MATS) instead pay their participants. I wonder if this introduces minor adverse selection, in that only writers who are otherwise financially stable can afford to participate. Famously, startup incubators that charge (like OnDeck) are much worse than incubators that pay for equity (like YC or hf0).
I imagine you’ve thought about this a lot already, and you do offer need-based scholarships, which is great; events like LessOnline and Manifest have also shown that charging attendees can work. But maybe there’s some other way of finding sponsors or funders for these writers? For example, I think Manifund would be quite happy to sponsor 1-3 “full rides” at $5k+ each for a few bloggers interested in topics like AI safety funding, impact evaluations, and new opportunities, which we could occasionally crosspost to the Manifund newsletter. And I imagine other orgs like GGI might be too!
I agree with the paper that paying here probably has minimal effects on devs; but even if it does have an effect, it doesn’t seem likely to change the results, unless somehow the AI group was more incentivized to be slow than the non-AI group.
Minor point of clarity: I briefly attended a talk/debate where Nate Soares and Scott Aaronson (not Sumner) were discussing these topics. Are we thinking of the same event, or was there a separate conversation with Nate Soares and Scott Sumner?
If you’re looking to do an event in San Francisco, lmk, we’d love to host one at Mox!
Thanks Ozzie—we didn’t invest that much effort into badges this year, but I totally agree there’s an opportunity to do something better. Organizer-wise it can be hard to line up all the required info before printing, but having a few sections where people can sharpie things in or pick stickers seems like low-hanging fruit.
This could also extend beyond badges—for example, attendees could pick different colored swag t-shirts to signal their affiliation (eg academia vs lab vs funder) at a conference.
I’ll also send this to Rachel for the Curve, as I expect she might enjoy it as a visual and event design challenge.
Huh, seems pretty cool and big-if-true. Is there a specific reason you’re posting this now? Eg asking people for feedback on the plan? Seeking additional funders for your $25m Series A?
My guess btw is that some donors like Michael have money parked in a DAF, and thus require a c3 sponsor like Manifund to facilitate that donation—until your own c3 status arrives, ofc.
(If that continues to get held up, but you receive an important c3 donation commitment in the meantime, let us know and we might be able to help—I think it’s possible to recharacterize same-year donations after c3 status arrives, which could unblock the c4 donation cap?)
From the Manifund side: we hadn’t spoken with CAIP previously but we’re generally happy to facilitate grants to them, either for their specific project or as general support.
A complicating factor is that, like many 501c3s, we have a limited budget that we can send towards c4s; eg, I’m not sure we could support their maximum ask of $400k on Manifund. I do feel happy to commit at least $50k of our “c4 budget” (which is their min ask) if they raise that much through Manifund; beyond that, we should chat!
Thanks to Elizabeth for hosting me! I really enjoyed this conversation; “winning” is a concept that seems important and undervalued among rationalists, and I’m glad to have had the time to throw ideas around here.
I do feel like this podcast focused a bit more on some of the weirder or more controversial choices I made, which is totally fine; but if I were properly stating the case for “what is important about winning” from scratch, I’d instead pull examples like how YCombinator won, or how EA has been winning relative to rationality in recruiting smart young folks. AppliedDivinityStudies’s “where are all the successful rationalists” is also great.
Very happy to answer questions ofc!
Thanks for the feedback! I think the nature of a hackathon is that everyone is trying to get something that works at all, and “works well” is just a pipe dream haha. IIRC, there was some interest in incorporating this feature directly into Elicit, which would be pretty exciting.
Anyways I’ll try to pass your feedback to Panda and Charlie, but you might also enjoy seeing their source code here and submitting a Github issue or pull request: https://github.com/CG80499/paper-retraction-detection
Oh cool! Nice demo, and happy to see it’s shipped and live, though I’d say the results were a bit disappointing on my very first prompt.
(if that’s not the kind of question you’re looking for, then I might suggest putting in some default example prompts to help the user understand what questions this is good for surfacing!)
Thanks! Appreciate the feedback in case we run a future hackathon or similar event~
Thanks, appreciate the thanks!
Strong upvoted—I don’t have much to add, but I really appreciated the concrete examples from what appears to be lived experience.
This company now exists! Brighter is currently doing a presale for a floor lamp emitting 50k lumens, adjustable from 1800K to 6500K: https://www.indiegogo.com/projects/brighter-the-world-s-brightest-floor-lamp#/. I expect it’s more aesthetic and turnkey than DIY lumenator options, but probably somewhat more expensive (MSRP is $1499, with early bird/package discounts down to $899).
Disclosure: I’m an investor; I’ve seen early prototypes but have not purchased one myself yet.
I think credit allocation is extremely important to study and get right, because it tells you who to trust and who to grant resources to. For example, I think much of the wealth of modern society is downstream of sensible credit allocation between laborers, funders, and corporations, in the form of equity and debt, which allows successful entrepreneurs and investors to have more funding to reinvest into good ideas. Another (non-monetary) example is authorship in scientific papers; there, correct credit allocation helps people in the field understand which researchers are worth paying attention to, whose studies ought to be funded, etc. As any mechanism designer can tell you, these systems are far from perfect, but I think they’re still much, much better than the default in the nonprofit world.
(I do agree that caringness is often a bigger bottleneck than funding, for many classes of important problems, such as trying to hire someone into a field)
Makes sense, thanks.
FWIW, I really appreciated that y’all posted this writeup about mentor selection—choosing folks for impactful, visible, prestigious positions is a whole can of worms, and I’m glad to have more public posts explaining your process & reasoning.
I think this is broadly correct. My sense is that funders in the space are starting to think about what to do in light of Anthropic dollars, but not a lot of concrete things have started happening yet.
Beyond other e2g folks starting to donate more now, I think other things that start to make sense include:
- Fieldbuilding now (eg new orgs, incubators, fellowships, recruiting, outreach), so that the incoming funds have good opportunities to flow to
- Designing funding institutions that scale to handle 10x to 100x the number of dollars, and also the number of “principals” (since I expect that, as opposed to OP having a single Dustin, Anthropic will produce something like 50-100 folks with $10Ms-$100Ms to donate)
- Revisiting ideas from the FTX Future Fund era: prizes, for-profit norms, ambitious scalable uses of funds, moonshots