Hey there~ I’m Austin, currently building https://manifund.org. Always happy to meet LessWrong people; reach out at akrolsmir@gmail.com!
(thanks)
On the subject of participation, a sometimes underappreciated fact is that prediction markets allow much deeper participation in the form of voting with more dollars. Even if most participants are naive, it only takes one thoughtful trader with a large bankroll to outweigh a bunch of low-signal traders.
(granted, at the tails, it does become more expensive to further lower the odds. Eg at 5%, you’re paying $19 per $1 downwards)
I think this is overblown and mostly highlights the difficulty of getting prediction markets to proper % at the tails, due to the opportunity cost of money.
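The "$19 per $1" figure falls out of the payout math for NO shares. A minimal sketch (function name is mine, just for illustration):

```python
def no_cost_per_dollar_profit(p: float) -> float:
    """Dollars spent buying NO shares per $1 of profit, when the market's
    implied YES probability is p. A NO share costs (1 - p) and pays $1 if
    the event doesn't happen, so each share yields p in profit."""
    return (1 - p) / p

# At 5% implied odds, pushing the price further down costs ~$19 risked
# per $1 of profit; at 50%, it's an even $1-for-$1.
print(no_cost_per_dollar_profit(0.05))
print(no_cost_per_dollar_profit(0.50))
```

This is why capital locked up at the tails has a high opportunity cost: the thoughtful trader needs $19 tied up for every $1 of expected correction.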
Thanks for continually writing about these kinds of political opportunities, in public!
Some people & places you may want to reach out to, to float this article (and your volunteer signup form), chosen for having some context on NYC specifically:
Zvi Mowshowitz
Jessie of collider.nyc
@Screwtape / Skyler who organizes the east coast megameetup
Overcoming Bias/ACX/Rationality NYC (maybe, @Robi Rahman?)
(happy to intro, and apologies if these are obvious!)
Thanks for writing this! Some reasons I would steelman continued funding towards tetlockian or PM-style forecasting:
Source and screen for talent. There sure is some correlation between forecasting well and doing important things in EA. Just picking some people I know: Joel Becker, another former #1 Manifold trader, went on to join METR and then do their famous uplift studies. Eli Lifland went on to help make AI 2027. Peter Wildeford started Rethink Priorities, now IAPS. And some of your own track record in making good early-stage EA grants is here.
Beyond that, a bunch of smart and interesting people have expressed a lot of interest in forecasting, from banner bearers like Scott Alexander and Vitalik Buterin and Robin Hanson, to surprising cases like Anthony Giovanetti (of Slay the Spire) to <anon famous AI researcher who DM’d me> to Sam Altman. I do think there’s some amount of intellectual fashion-ism going on here, but also, you should fish where the fish are.
When funding is abundant, one bottleneck becomes finding (and building consensus around) talent; if the only thing that a bunch of money spent on forecasting does is to identify good people, that may be worth it.
Fast, accurate info in times of chaos. Prediction markets are actually quite good at distilling signal from noise during times of high uncertainty, eg recently around the Russia/Ukraine war and the Iran war. Manifold’s usage numbers spike every time there’s a crisis. Because PMs pay a high premium for being speedy, they’re often the fastest trustworthy source of data. If the world becomes more chaotic due to faster tech growth, it may be quite valuable to have this place to stay up to date.
New unlocks from growth in AI capabilities. Historically, one very expensive input into forecasting is forecaster time. As LLMs catch up to top human forecasters, it’ll soon be cheap to get a calibrated answer to any question one might ask. On priors, this should help us with making better decisions or making futarchy possible. I agree this is somewhat speculative still, and wish more people were trying things in this space.
I included this story as a short anecdote about Marcus’s ability to spot talent, make active investments, and convince founders to take the leap, all of which I expect to transfer into helping start great AI x Animal orgs. I understand that different people in EA/AI safety have different takes about whether Mechanize specifically is good or bad—I happen to think good or at least neutral.
(And I take responsibility for any factual errors with this specific anecdote. Talking to Marcus just now, it seems like his main nudge was to convince Ege/Matthew/Tamay that the nonprofit structure was wrong for what they wanted to accomplish.)
the tension between job, career, and the actual work you think is important
- I spend a lot of my time thinking about what my soon-to-be patrons might want to fund.
- Recently this has been Anthropic employees, which is weird and stressful in a bunch of ways (“Anthropic employees” are not a single coalition; many are friends, and asking them for money is icky; all are busy, and already being swarmed by other people seeking their money, and therefore defensive).
- But historically it’s also been various potential funders, maybe OP/CG as the biggest of them. Which, on reflection, feels a bit insane given that OP/CG have never actually funded any of my work (and mostly haven’t funded the people funding my work!)
- I also think I have a pretty good track record of just, doing the thing and believing that money will come. We shipped Manifold v0 before we got the first grant; Manifest and Mox were internally funded first.
- I’m really tempted to give advice like “do great work and the money will follow”, and it’s kind of true, but also maybe generates a lot of bycatch?
- Probably my biggest patron by total $ was FTX Future Fund. That’s probably part of why I’m so defensive of them, even now. Maybe, half of Manifund is just keeping the spirit of the Future Fund alive.
- One way to avoid the downsides of patronage is to go direct, make your own money. Substack is the classic example. (But then, paying subscribers on Substack or the general internet landscape has its own set of preferences and downsides)
As a patron myself?
- on a small scale, I do enjoy funding stuff (mostly weird software or meta projects)
- and institutionally, Manifund sponsoring Inkhaven is an (indirect) way of supporting the arts. (supporting the supporters of the arts, I guess)
- beyond money, there are other ways to support creators, which a lot of my recent career has been about. For example, building tools for them (Manifold), events (Manifest), and operational support (ACX Grants).
Patronage vs other funding mechanisms
- I get the feeling patronage is kinda “cool”. Emergent Ventures was cool, Erik Hoel has a writeup on it, it was cool when Gwern got $100k from some startup founder.
- Maybe like 2021-2023ish there was a lot more “no strings attached microgrants are awesome” discourse. Beyond FTX stuff this was ACX Grants, Francisco San, Moth Fund, AI Grants. It feels somewhat out of fashion now.
Funding writing, specifically: If you have money (and maybe, a lot of money), how should you cause good writing and art to exist?
- Most directly, you can do the kind of patronage Jenn talks about, give money directly to good writers. Somehow this seems a lot rarer than it ought to be? Is there a missing product or norm here?
- You can just commission pieces. (Many times, I’ve tried to hire people to write for the Manifold or Manifund substack. Mysteriously this hasn’t worked out that well; eg, no essay has hit the top of Hacker News.)
- Another approach is to farm writers. Get a bunch of people writing at once, and see who the winners are. Which is the Inkhaven or other residency/fellowship/batch approach.
- Another is to start a journal or publishing outfit. Stripe Press, Asterisk Mag, Asimov Press.
- Orgs can also just sponsor fulltime writers. Sometimes this is a “fellow”, loosely affiliated. (Anthropic/OpenAI writing fellowship when?)
- Some thinktanks are just orgs that write; Forethought and the Institute for Progress feel like this.
- Unfortunately a lot of great writing is locked in the heads of people who have extremely high opportunity costs. It seems like OpenPhil basically just paid Joe Carlsmith to write stuff for a while, which seemed great. Also somehow Holden Karnofsky stopped running OP to write stuff for a while, which also seemed great. I’m always happy when Oli Habryka takes a break from running Lightcone to drop some new essays. (see also https://aarongertler.com/too-good/.)
- And essay competitions are a thing, ofc.
Essay competitions:
- Writing has the nice property of being cheap to assess, which is possibly why competitions are a reasonable structure
- EA is somehow: blessed with an abundance of great writers, and also really into prizes/competitions, and also has a terrible track record of the competitions working well. IMO, the Blog Building fellowship fell apart for ??? reasons, Cause Exploration Prizes didn’t turn out anything good, the best EA Fiction Contest submission was written by a judge. The best EA Red Teaming entries were not really submitted for the competition, iirc they were Scott’s and Holden’s.
- (though, it sure seems like I’m just anchored to my existing favorite writers)
- Lesswrong does a cool yearly “best of” retrospective voting/judging thing of writing from 2 years ago? Maybe that’s the true good cadence to judge things on.
this braindump brought to you by the MANIFUND ESSAY PRIZE https://manifund.org/essay, submit by Fri Apr 24!
Hey Phillip! I wanted to say that I quite enjoyed reading this, as a fellow Catholic and fairly neurotypical (I think??) person. Having been around this scene for a few years now, it’s fun to find out what strikes newcomers as noteworthy; I’m excited for the rest of your Inkhaven writings!
Calibration City https://calibration.city/ is also a great resource for looking into questions about calibration of various prediction market platforms!
For Mox events, our rule of thumb is that attendance is 100% of Partiful Goings, or 50% of Luma RSVPs. Obviously a protest/march may have different dynamics, but this method would forecast ~120 participants.
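The rule of thumb above is simple enough to write down directly. A minimal sketch (function name and the example RSVP counts are mine, purely illustrative):

```python
def estimate_attendance(partiful_going: int = 0, luma_rsvps: int = 0) -> int:
    """Mox rule of thumb: expect ~100% of Partiful 'Going' responses
    and ~50% of Luma RSVPs to actually show up."""
    return partiful_going + luma_rsvps // 2

# Eg, an event with 240 Luma RSVPs would forecast ~120 attendees:
print(estimate_attendance(luma_rsvps=240))
```

As noted, a protest/march likely has different show-up rates than a venue event, so the coefficients would need recalibrating.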
Dumb question, why do this on a weekend instead of a weekday? I imagine a lot more employees show up on weekdays (though maybe protestors are more available on weekends and maximizing crowd size is important?)
One additional reason that capacity-building for AI safety seems good right now, is that very soon I expect there to be a lot more funding available for AI safety work, from Anthropic donors (see Front-Load Giving Because of Anthropic Donors? and this comment of mine) as well as a broader societal wakeup about risks from AI.
When money becomes more available, the bottleneck becomes “good opportunities/people to spend money on”, which is what capacity-building produces. Also starting asap seems important—capacity-building takes time to set up and bear fruit, and some kinds of capacity building have snowball-y effects (eg MATS).
Curious whether y’all considered Tiptap as a base for the editor, and if so why you decided against it?
(Tiptap is what we use for Manifold/Manifund and there are definitely some warts—eg markdown not being supported out of the box, though recently I think that’s changed—but mostly I’ve liked it.)
I like this vision! “LLM-native Manifold” seems like an obvious thing to explore, but I don’t know of anyone doing this. Some other thoughts:
Manifold-like systems seem like a nice way of running a tournament/evolutionary algorithm to find the best model & scaffold for forecasting. One of the historical arguments for prediction markets is that they help surface human talent (by allowing smart forecasters to win money/influence in society), and you could view “conjuring the right scaffold” as the modern equivalent.
There’s a few distinct intellectual tasks in organizing a prediction market, which could be supercharged with LLMs: forecasting/trading (which has been explored the most), question creation, and question resolution. I suspect LLM assistance on the latter two could help with making PMs actually useful with decisionmaking.
Within forecasting/trading, a lot of effort has gone into something like “make a better brier/loss score” (eg see forecastbench), but I think a lot of the nuance in trading well involves being good at market selection and bet sizing, aka knowing where your edge is and how confident to be in it. I’d like to see more people try to incorporate this into bots, and to see bet sizing matter more in bot evaluations.
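One standard formalization of "how confident to be in your edge" is the Kelly criterion; the comment above doesn't name it, but a minimal sketch for a binary market might look like this (function name is mine):

```python
def kelly_fraction(belief: float, price: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on YES at `price`,
    given your subjective probability `belief`. For a binary contract
    the net odds are (1 - price) / price, and the general Kelly formula
    reduces to (belief - price) / (1 - price). A negative result means
    the edge is on the NO side instead."""
    return (belief - price) / (1 - price)

# If you think an event is 60% likely and the market says 50%,
# Kelly suggests staking ~20% of bankroll on YES:
print(kelly_fraction(0.60, 0.50))
```

Note that full Kelly is aggressive under model error, which is exactly the market-selection point: most of the skill is in knowing when `belief` is trustworthy enough to size up.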
LLMs seem like our best bet of getting futarchy to work; with cheap intelligence there are a lot more things we could try here
I think this is broadly correct. My sense is that funders in the space are starting to think about what to do in light of Anthropic dollars, but not a lot of concrete things have started happening yet.
Beyond other e2g folks starting to donate more now, I think other things that start to make sense include:
Fieldbuilding now (eg new orgs, incubators, fellowships, recruiting, outreach) so that the funds have good opportunities to go to
Designing funding institutions that scale to handle 10x to 100x the number of dollars, and also the number of “principals” (since I expect that, as opposed to OP having a single Dustin, Anthropic will produce something like 50-100 folks with $10Ms-$100Ms to donate)
Revisiting ideas from the FTX Future Fund era: prizes, for-profit norms, ambitious scaleable uses of funds, moonshots
Thanks for the review! Speaking on Manifund:
The way it’s set up doesn’t make clear to applicants just how hard it is to get funded there
Is Manifund overpromising in some way, or is it just that other funders like OP/SFF don’t show you the prospective/unfunded applications? My sense is the bar on getting significant funding on Manifund is not that different than the bar for other AIS funders, with some jaggedness depending on your style of project. I’d argue the homepage sorted by new/closing soon actually does quite a good job of showing what gets funded and the relative difficulty of doing so.
Many of the best regranters seem inactive, and some of the regranter choices are very questionable.
I do agree that our regrantors are less active than I’d like; historically, many of the regrantor grants go out in the last months of the year as the program comes to an end.
On matters of regrantor selection, I do disagree with your sense of taste on eg Leopold and Tamay; it is the case that Manifund is less doom-y/pause-y than you and some other LW-ers are. (But fwiw we’re pretty pluralistic; eg, we helped PauseAI with fiscal sponsorship through last year’s SFF round.) Furthermore, I’d challenge you to evaluate the regrantors by their grants rather than their vibe; I think the one grant Leopold made was pretty good by many lights; and Tamay hasn’t made a grant yet.
We are also open to bringing on other regrantors, and have some budget for this—if you have candidates who you think would do a better job, please do suggest them!
This looks awesome, congrats on announcing this! I would be extremely tempted myself were it not for a bunch of other likely obligations. Approximately how large do you expect this fellowship to be?
Also, structuring Inkhaven as a paid program was interesting; most fellowships (eg Asterisk, FLF, MATS) instead pay their participants. I wonder if this introduces minor adverse selection, in that only writers who are otherwise financially stable can afford to participate. Famously, startup incubators that charge (like OnDeck) are much worse than incubators that pay for equity (like YC or hf0).
I imagine you’ve thought about this a lot already, and you do offer need-based scholarships which is great; also things like LessOnline and Manifest have proven some amount of success for charging for events. But maybe there’s some other way of finding sponsors or funders for these writers? For example, I think Manifund would be quite happy to sponsor 1-3 “full rides” at $5k+ each, for a few bloggers who are interested in topics like AI safety funding, impact evaluations, and new opportunities, which we could occasionally crosspost to the Manifund newsletter. And I imagine other orgs like GGI might be too!
I agree with the paper that paying here probably has minimal effects on devs, but even if it does have an effect, it doesn’t seem likely to change the results, unless somehow the AI group was more incentivized to be slow than the non-AI group.
Minor point of clarity: I briefly attended a talk/debate where Nate Soares and Scott Aaronson (not Sumner) was discussing these topics. Are we thinking of the same event, or was there a separate conversation with Nate Soares and Scott Sumner?
If you’re looking to do an event in San Francisco, lmk, we’d love to host one at Mox!
yes—but one of the nice fundamental properties of prediction markets is that over time, thoughtful people accumulate larger bankrolls
and yes, I think Metaculus comments are generally quite good; Manifold’s are sometimes good, Polymarket/Kalshi are approximately garbage. This is, I think, partly cultural effects (and product decisions) about who comments vs trades and how those get represented, but it also reflects something important about the distribution of the underlying audience: Metaculus has a handful of extremely thoughtful forecasters, while Polymarket may have that plus a thousand degen gamblers. my contention again is that the structure of markets happily means that the latter can still be quite accurate.
I don’t know if you’ve seen https://brier.fyi/ (and, imo their results should be taken with a grain of salt, though also I might just be salty); but my main takeaway is that they’re all pretty calibrated, and broadly could be cited much more (whether market or poll)
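For concreteness, "pretty calibrated" here means that among questions a platform priced at, say, 70%, roughly 70% resolved YES. A minimal sketch of that check (function name is mine; sites like brier.fyi and calibration.city do a fancier version of the same bucketing):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=10):
    """Group (forecast, outcome) pairs into probability bins and compare
    the average forecast in each bin with the realized frequency;
    well-calibrated forecasts have the two roughly equal per bin.
    Returns {bin_index: (mean_forecast, realized_freq, count)}."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    table = {}
    for idx, pairs in sorted(bins.items()):
        ps = [p for p, _ in pairs]
        ys = [y for _, y in pairs]
        table[idx] = (sum(ps) / len(ps), sum(ys) / len(ys), len(pairs))
    return table
```

A platform can be well calibrated while still being uninformative (always saying 50%), which is why resolution/Brier scores get reported alongside calibration.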