Basically just +1 on what Michael said. How are you using markets on nuclear war in your decision making? Very concretely, can you name a decision you made differently due to these markets?
I don't think it's true that the news media is now more rational than it used to be. Outlandish nonsense is still said all the time. It's also not clear to me that it would matter much, even if it were true.
100B in revenue seems awfully low. For context, Walmart did 700B in revenue last year and Toyota did 330B. Neither company is exactly close to AGI. 100B is about 0.1% of world GDP. It's a lot, but it's hard to draw a line from that to AGI. I think 1T minimum for this kind of argument, and closer to 10T for this line of reasoning.
I think from the time I started to the time I stopped, I didn't get any better. I was just as reasonable at both points in time.
I am mostly talking about Tetlockian forecasting, though I am also talking about other versions of it, including AI 2027.
I didn’t want to argue against AI 2027-type stuff in this post, but on net, I think AI 2027 made some very aggressive predictions that will turn out to be wrong (even if you give double the time for them to occur), and I think AI safety people will end up looking silly, like the boy who cried wolf.
For two concrete examples:
“By early 2030, the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas.” This one is easy to operationalize. I would bet that by the end of 2032, less than 20% of the Earth’s oceans will be taken over by the “robot economy”.
“June 2027: Most of the humans at OpenBrain can’t usefully contribute anymore.”
Forecasting is Way Overrated, and We Should Stop Funding It
The top things I am currently seeking to fund in AI x Animals work:
I want a better benchmark for animal harms, made with input from lab employees. I gave CAML some funding earlier. I think it was okay as a first pass, but nowhere near good enough. I think this will be expensive to create, but good.
Sentience charters/constitutions: lobby the labs to put things in system cards and constitutions, make classifiers for prompts that involve animals, etc.
Humane tech: welfare tech should be made by welfare people. We should be actively involved in industry, shaping methods, practices, and technology (stunners, ovo-sexing, genetics, etc.), so that good decisions get made.
Insects/Neglected Species work.
I wouldn’t say I “nudged” him. He was doing it. I invested since I thought it was a good investment (it has been). They had no problem raising money, and my investment replaced part of one of the other investors’ cheques.
I wouldn’t have included this, especially since it’s a private investment, but Austin really wanted to.
I have donated a lot of money recently to animal welfare (~$450k in the last 5 months). I would have donated less if I had not had this investment.
Mechanize sells environments to AI labs (this is where all its revenue comes from), so if you think investing in the labs is OK, investing in Mechanize should be too.
Manifund’s Falcon Fund
I agree with this (and made this assessment about a month ago). I have asked Remmelt for payment. I'd be happy to make a new, similar bet. I don't see reason to believe there will be an AI crash.
I want to offer a counterpoint to this. Ariel Simnegar and I made AltX, which, though we shut it down, I think it's hard to say did anything ethically questionable apart from merely trading crypto. We ran no scams, no pump and dumps, nothing. We didn't make crazy amounts of money, but we made a decent amount for our investors, and we ultimately decided to shut down since it seemed we would have trouble raising enough money and scaling our arbitrage strategies to make the effort worthwhile. All investments and profits were returned to investors without issue, and I continue to have a good relationship with our investors.
I appreciate this post, upvoted. I agree with basically all the reasons for donating. Alex Bores is one of the few potential politicians who has shown any care at all about existential risk from AI, has great EA-minded staff around him, cares about other EA priorities (like AW), and, rare for a politician, seems like he might be an overall decent person.
I want to push back on the career implications/career capital costs of making a donation like this. I think EAs are, by and large, far too paranoid about these kinds of risks and stress out over analyzing them, so I want to make the following points.
Nobody cares. People in practice simply don’t look up this type of stuff. They just don’t. Have you ever checked the political donations of even your closest friends or family? If you’ve ever hired somebody, did you ever look into this sort of thing in a background check? Nobody is following your life that attentively other than you. Everyone else is like you; they don’t look up random people’s political donations. You can dance and donate like nobody’s watching, because nobody is.
This type of donation is extremely defensible and justifiable. When asked about it, you simply say, “yes, existential risk from AI is one of my top priorities and Alex was one of the few politicians taking it seriously at the time”. If he does something bad in the future, you just say, “yes, at the time he seemed to care about AI safety. Unfortunately I was wrong”.
People forget and change their minds. The current President, Donald Trump, is not someone who most would consider to be accepting of criticism and dissent, to put it lightly. With that in mind, here are some things JD Vance said about Trump prior to becoming Vice President:
“My god, what an idiot.”
“America’s Hitler” or a “cynical asshole like Nixon.”
“I’m a ‘Never Trump’ guy. I never liked him.”
“Trump is cultural heroin.”
Other Trump appointees have said similar things. In practice, this type of stuff just doesn’t matter. The half-life of the importance of donations/speech on careers is extremely short, if it even matters in the first place.
Here are some additional reasons to make this donation:
There’s a limit to who can donate and how much: only US citizens and permanent residents can donate, and only up to a certain amount.
As much as possible, it’d be good for there to be a political coalition of people who care about AI safety. Showing that this coalition exists will make AI safety a more politically salient issue, somewhat like environmentalism.
This is a great post. Good eye for catching this and making the connections here. I expect to see more “cutting corners” like this, though I’m not sure what to do about it, since I don’t think it will feel internally like corners are being cut, rather than necessary updates that are only obvious in hindsight.
Our bet on whether the AI market will crash
Remmelt and I have made a bet at $5k:$25k odds on whether there will be an AI market crash by the end of 2026, with a crash being declared if 2 of the following 3 criteria are met:
OpenAI 2025 or 2026 yearly revenue is below $1.6 billion.
Anthropic 2025 or 2026 yearly revenue is below $400 million.
Nvidia revenue for any quarter in the range of Q3 2025 to Q4 2026 under the ‘data center’ category (covering the same revenue items, even if renamed/moved to something else) is below $8.5 billion.
The source for the first two criteria will be public statements by the companies or credible reporting. Nvidia data center revenue will be as reported by Nvidia.
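As a quick sanity check on what $5k:$25k odds mean, here is a sketch of the implied break-even probability. Which side staked which amount is my assumption from the framing (the crash side risking $5k to win $25k), not something stated above.

```python
# Implied break-even probability for $5k:$25k odds (illustrative sketch).
crash_stake = 5_000       # assumed stake of the side betting a crash happens
no_crash_stake = 25_000   # assumed stake of the side betting against a crash

# The side risking $5k to win $25k breaks even when
#   P(crash) * 25_000 == (1 - P(crash)) * 5_000,
# i.e. P(crash) = 5_000 / (5_000 + 25_000).
breakeven_p = crash_stake / (crash_stake + no_crash_stake)
print(round(breakeven_p, 3))  # 0.167
```

So under this reading, the crash side only needs to believe a crash is more than about 1-in-6 likely for the bet to be positive expected value for them.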
We will make a formal post on it shortly.
Remmelt, if you wish, I’m happy to operationalize a bet. I think you’re wrong.
That doesn’t seem like the right analogy. The bonds are forced to fold over themselves because electrons repel each other and don’t want to touch. So the natural structures are mostly tetrahedral. Think of lines running from the vertices of a tetrahedron to its centre: they meet at 109.5° angles. When you imagine a bunch of these connected, you will see that they all start folding over themselves and would need to occupy the same space, which is not possible because the electrons repel. So you get distortions and all kinds of adjustments to push them apart, and then it’s all complicated by a bunch of weak forces. The primary thing giving structure is this long string of covalent bonds.
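For the curious, the ideal tetrahedral angle can be checked directly: the angle between two centre-to-vertex lines of a regular tetrahedron is arccos(-1/3), about 109.5°. A minimal check:

```python
import math

# Angle between two centre-to-vertex lines of a regular tetrahedron:
# the dot product of any two such unit vectors is -1/3, so the angle
# is arccos(-1/3) ~= 109.5 degrees.
tetrahedral_angle = math.degrees(math.acos(-1 / 3))
print(round(tetrahedral_angle, 1))  # 109.5
```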
Also, “forces in the lipid layer surrounding cells” are not proteins.
I just made my account, but I want to remind everyone that you cannot infer how good your prediction was (or how good a bet this was) from one data point (how this election turned out). If you want to dig deep into the odds that every state was given, you can start to make a case, but anyone with the gut reaction that, since the election was close, this was a bad bet is wrong.
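To illustrate why one resolved event is not enough (the numbers and forecaster labels below are mine, purely for illustration): scoring a single forecast cannot separate a well-calibrated forecaster from a lucky one.

```python
# Brier score of a probability forecast p against a binary outcome o
# (o = 1 if the event happened, 0 otherwise); lower is better.
def brier(p: float, o: int) -> float:
    return (p - o) ** 2

# Hypothetical: two forecasters predict the same event, which happens (o = 1).
cautious = brier(0.55, 1)   # called it a near coin flip
confident = brier(0.95, 1)  # called it near-certain

print(confident < cautious)  # True: the confident forecast scores better...
# ...but if the true odds really were ~55%, the confident forecaster was
# simply lucky. Telling skill from luck requires scoring many events.
```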
FWIW, this does not change my mind on my OP, though this is interesting.