I think this is a good point. I don’t think it makes up for the ~$125M that EA has put into forecasting, though.
I agree that forecasting is an OK way to find talent, but not much of that has actually happened.
I agree that they are useful in chaotic situations, but I don’t think they hit the bar for EA funding.
Sure, it’s speculative. But if the AIs will use them, then they will also build them, so we can relax on forecasting for now and let them do it in the future.
I’m not sure I understand the point. A lot of people have donated money to Wikipedia, and now it has a big war chest. I agree Wikipedia has been valuable, but I’m not sure how you are computing that value. I don’t see any proof that hundreds of thousands of people are making incrementally better decisions, though. My point is that this was the hope, but it is sort of just asserted without evidence. I’m happy you’ve enjoyed Manifold; I don’t think that means EA money should go to it. There is a very, very high bar to clear for EA money.
I’m not sure I understand your point. I agree that Kalshi and Polymarket are big and have attracted users and investment. My point is that they haven’t produced the outpouring of social returns that people claim. They are just good businesses (charging fees on gambling).
I don’t think you can claim millions of dollars of value from community growth, and even if you could, I’m fairly sure the ROI would still be negative.
I agree that forecasting has been somewhat useful for identifying talent/nerd-sniping. I don’t think very much of this has happened, though. I think fewer than 25 people (to be conservative) who were previously unknown have roles due to their forecasting prowess.
This has been discussed elsewhere.
There is no proof here. Would you say today’s epistemic environment is much better than it was 20 years ago?
Anything in particular you want expanded upon? I think this is most of what I have to say on the matter. I’ve been saying some form of this opinion for about 3 years now and I’m happy this is finally out there.
Yeah, my point is that the bar for EA money needs to be very, very high.
It’s on the EA forum. Was posted at the same time!
I’m not sure of the norms here, but I will just copy over my reply from the EA forum.
Hi Josh, thanks for the response.
I hate to do this, especially at the start, but I want to point out for you and others who have jobs related to forecasting that it’s difficult to convince someone of something when their job relies on them not believing it. I think you should assume that you will think forecasting is more useful than it is.
As for your points, I’ll respond to some of them.
If you want to DM me, I can sign an NDA, and I may update my opinion depending on what these non-public uses of forecasting are.
I don’t think this is all that relevant. I’m not sure what forecasting research has really elicited on AI timelines. I agree that talk about timelines creates a lot of “buzz” around AI, but depending on your views, that’s either good or bad.
I agree that the impact of measurement-oriented research is difficult to measure, but, importantly, not impossible. OWID, for example, should count how much their work is being cited and looked up. Similarly, I think it would be good to estimate, for FRI, how much a changed decision was worth in dollars and by what amount/percentage FRI made that change more likely. I don’t think you really gave a good reason that FRI should be funded over anything else that simply has very diffuse benefits.
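To sketch the kind of estimate I have in mind (my notation, not FRI’s): let $\Delta V$ be the dollar value of the improved decision relative to the counterfactual decision, and $\Delta p$ the increase in the probability of the better decision that is attributable to FRI’s work. Then, roughly,

$$\text{Impact} \approx \Delta p \times \Delta V$$

For example, if a report made a $10M-better decision 5% more likely, that’s roughly $500k of impact to weigh against the cost of producing the research.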
When do you think it’s reasonable, if ever, for the EA community to “give up” on funding more forecasting work?
If I’m being cynical, almost every field can say “AI will transform the field,” though I’m not sure how much this is worth debating.
Yeah, I just don’t agree that reality has played out like AI 2027 in any meaningful way that isn’t very obvious. Basically no meaningful predictions resolve until the end of 2026, so it’s too early to say, and far too early to claim victory.
I have been meaning to write up my critiques of AI 2027 but I have too many of these kinds of posts to write up and I’m a slow writer.
FWIW, this does not change my mind on my OP, though this is interesting.
Basically just +1 on what Michael said. How are you using markets on nuclear war in your decision making? Very concretely, can you name a decision you made differently due to these markets?
I don’t think it’s true that the news media is now more rational than it used to be. Outlandish nonsense is still said all the time. It’s also not clear to me that it would matter that much, even if it were true.
$100B in revenue seems awfully low. For context, Walmart did ~$700B in revenue last year and Toyota did ~$330B; neither company is exactly close to AGI. $100B is about 0.1% of world GDP. It’s a lot, but it’s hard to draw a line from that to AGI. I think $1T is the minimum for this kind of argument, and closer to $10T for this line of reasoning.
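For the arithmetic, taking world GDP as roughly $105T (my assumption, approximately the recent figure): $100\text{B} / 105\text{T} \approx 0.095\%$, i.e. about 0.1%.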
I think from the time I started to the time I stopped, I didn’t get any better; I was just as reasonable at both points in time.
I am mostly talking about Tetlockian forecasting. I am talking about other versions of it too, though, including AI 2027.
I didn’t want to argue against AI 2027-type stuff in this post, but on net, I think AI 2027 made some very aggressive predictions that will turn out to be wrong (even if you give them double the time to occur), and I think AI safety people will end up looking silly, like the boy who cried wolf.
For two concrete examples:
“By early 2030, the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas.” This one is easy to operationalize. I would bet that by the end of 2032, less than 20% of Earth’s current oceans will be taken over by the “robot economy”.
“June 2027: Most of the humans at OpenBrain can’t usefully contribute anymore.”
My current top things I am seeking to fund in AI x Animals work:
I want a better benchmark for animal harms, made with input from lab employees. I gave CAML some funding earlier; I think it was okay as a first pass, but nowhere near good enough. I think this will be expensive to create, but worth it.
Sentience charters/constitutions: lobby the labs to put animal-relevant commitments in system cards and constitutions, build classifiers for prompts that involve animals, etc.
Humane tech: welfare tech should be made by welfare people. We should be actively involved in industry, shaping methods and practices, and the tech itself (stunners, ovo-sexing, genetics, etc.) should be made by us so that good decisions get made.
Insects/Neglected Species work.
I wouldn’t say I “nudged” him. He was already doing it. I invested since I thought it was a good investment (it has been). They had no problem raising money, and my investment replaced part of one of the other investors’ cheques.
I wouldn’t have included this, especially since it’s a private investment, but Austin really wanted to.
I have donated a lot of money recently to animal welfare (~$450k in the last 5 months). I would have donated less if I had not made this investment.
Mechanize sells environments to AI labs (this is where all its revenue comes from), so if you think investing in the labs is OK, investing in Mechanize should be OK too.
I agree with this (and made this assessment about a month ago). I have asked Remmelt for payment. I’d be happy to make a new, similar bet. I don’t see any reason to believe there will be an AI crash.
I want to be a counterpoint to this. Ariel Simnegar and I made AltX which, while we did shut down, is hard to accuse of anything ethically questionable beyond merely trading crypto. We ran no scams, no pump and dumps, nothing. We didn’t make crazy amounts of money, but we made a decent amount for our investors, and we ultimately decided to shut down because it seemed we would have trouble raising enough money and scaling our arbitrage strategies to make the effort worthwhile. All investments and profits were returned to investors without issue, and I continue to have a good relationship with our investors.
I appreciate this post, upvoted. I agree with basically all the reasons for donating. Alex Bores is one of the few potential politicians who has shown any care at all about existential risk from AI, has great EA-minded staff around him, cares about other EA priorities (like AW), and, rare for a politician, seems like he might be an overall decent person.
I want to push back on the career implications/career-capital costs of making a donation like this. I think EAs are, by and large, far too paranoid about these kinds of risks and stress out analyzing them, so I want to make the following points.
Nobody cares. People in practice simply don’t look up this type of stuff. They just don’t. Have you ever checked the political donations of even your closest friends/family? If you’ve ever hired somebody, have you ever looked into this type of thing in a background check? Nobody is following your life that attentively other than you. Everyone else is like you: they don’t look up random people’s political donations. You can dance and donate like nobody’s watching, because nobody is.
This type of donation is extremely defensible and justifiable. When asked about it, you simply say, “Yes, existential risk from AI is one of my top priorities, and Alex was one of the few politicians taking it seriously at the time.” If he does something bad in the future, you just say, “Yes, at the time he seemed to care about AI safety. Unfortunately, I was wrong.”
People forget and change their minds. The current President, Donald Trump, is not someone most would consider accepting of criticism and dissent, to put it lightly. With that in mind, here are some things JD Vance said about Trump prior to becoming Vice President:
“My god, what an idiot.”
“America’s Hitler” or a “cynical asshole like Nixon.”
“I’m a ‘Never Trump’ guy. I never liked him.”
“Trump is cultural heroin.”
Other Trump appointees have said similar things. In practice, this type of stuff just doesn’t matter. The half-life of the importance of donations/speech on careers is extremely short, if it matters in the first place.
Here are some more additional reasons to make this donation:
There’s a limit to who can donate and how much: only US citizens and permanent residents can donate, and only up to a certain amount.
As much as possible, it’d be good for there to be a political coalition of people who care about AI safety. Showing that such a coalition exists will make AI safety a more politically salient issue, somewhat like environmentalism.
This is a great post. Good eye for catching this and making the connections here. I expect to see more “cutting corners” like this, though I’m not sure what to do about it, since from the inside it won’t feel like cutting corners, just like making necessary updates whose costs are only obvious in hindsight.
Remmelt and I have made a bet at $5k:$25k odds on whether there will be an AI market crash by the end of 2026, with a crash counting as having occurred if at least 2 of the following 3 criteria are met (a sketch of the resolution logic is below):
OpenAI 2025 or 2026 yearly revenue is below $1.6 billion.
Anthropic 2025 or 2026 yearly revenue is below $400 million.
Nvidia revenue for any quarter in the range of Q3 2025 to Q4 2026 under the ‘data center’ category (covering the same revenue items, even if renamed/moved to something else) is below $8.5 billion.
Source for the first two criteria will be public statements by the companies or credible reporting. Nvidia data center revenue will be as reported by Nvidia.
We will make a formal post on it shortly.
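For concreteness, here is a minimal sketch of the resolution logic in Python. The thresholds come from the criteria above; the function name, the reading of “or” as “either year qualifies,” and the example figures are my own assumptions, not the official bet terms.

```python
# Minimal sketch of the bet's 2-of-3 resolution logic.
# All figures are in USD billions. Thresholds are from the bet criteria above;
# the function name and example numbers are hypothetical.

def crash_occurred(
    openai_lowest_annual_rev: float,      # lower of OpenAI's 2025 and 2026 yearly revenue
    anthropic_lowest_annual_rev: float,   # lower of Anthropic's 2025 and 2026 yearly revenue
    nvidia_lowest_dc_quarter_rev: float,  # lowest Nvidia data-center quarterly revenue, Q3 2025 through Q4 2026
) -> bool:
    criteria_met = [
        openai_lowest_annual_rev < 1.6,
        anthropic_lowest_annual_rev < 0.4,
        nvidia_lowest_dc_quarter_rev < 8.5,
    ]
    # The bet resolves "crash" if at least 2 of the 3 criteria are met.
    return sum(criteria_met) >= 2

# Hypothetical example: figures well above every threshold resolve to "no crash".
print(crash_occurred(13.0, 4.0, 30.0))  # False
```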
They will still be funding lots of forecasting, just not through a dedicated fund.