There have been 3 planes (billionaire donors) and 2 have crashed

I’d rather not go into the details of which billionaires are which, so if the real numbers are 4 and 3, or 6 and 4, I won’t argue the point. I’m much more worried about whether MIRI survives the decade.

It seems to me that this is a good place to figure out how to handle the contingency where the final billionaire donor crashes and burns, not how to get more billionaires or retain existing ones (and certainly not how to prevent them from getting bumped off). Signalling unlimited wealth might be valuable for charisma, but at the end of the day it’s better to admit that resources are finite if that’s what it takes to survive longer than 30 years.

I’ve met some people who were recently funded to start a group house in a rural town in Vermont to do AI safety work. Rents there are incredibly low, which makes it one of the most cost-effective places in the US to research AI safety. Ultimately, the goal is that people with progressively smaller amounts of savings could take a sabbatical at a Vermont group house and do unpaid research for 2-10 years, without working full-time or even part-time at some random software engineering job.
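To make the cost-effectiveness claim a little more concrete, here’s a back-of-the-envelope runway calculation. Every dollar figure below is a made-up placeholder for illustration, not an actual rent or budget from the Vermont house:

```python
# Back-of-the-envelope runway estimate: how long a pile of savings lasts
# at a given monthly burn rate. All dollar figures are hypothetical
# placeholders, not real rents or budgets.

def runway_years(savings: float, monthly_cost: float) -> float:
    """Years of unpaid research a given amount of savings can fund."""
    return savings / (monthly_cost * 12)

savings = 60_000  # hypothetical personal savings

# Hypothetical per-person monthly costs: rent share + food + everything else.
rural_vermont = 400 + 300 + 300   # cheap rent split among housemates
berkeley = 1_800 + 500 + 500      # solo room near MIRI's current location

print(f"Rural Vermont: {runway_years(savings, rural_vermont):.1f} years")
print(f"Berkeley:      {runway_years(savings, berkeley):.1f} years")
# With these made-up numbers: ~5.0 years vs ~1.8 years, which is the gap
# that makes "2-10 years on savings" plausible at all.
```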

The main problem here is network effects. I don’t remember the details, but they will have to drive at least 3 hours to Boston once a month (and probably more like 5-6 hours). Otherwise, they will be effectively alone in the middle of nowhere, totally dependent on the internet to exchange and verify ideas with other AI-safety-minded people (with all the risks entailed by filtering most of your human connection through the internet).

The other problem with the Vermont group house is that there are currently only three of them. If there were ten really smart people in Vermont researching existential risk, the isolation would be easier to handle with, say, shoulder advisors. Plus, if it were up to me, they’d be in rural Virginia (or parts of West Virginia), 5-6 hours from Washington, D.C., rather than from Boston, although the people who picked Vermont and funded it might know things I don’t (disclaimer: the idea was theirs, not mine; I only recognized the brilliance behind it after meeting one of the Vermont people).

Ultimately, though, it’s obviously better for AI-safety-affiliated people to be located within the metropolitan areas of San Francisco, New York, Boston, Washington, D.C., and London. New people and new conversations are the lifeblood of any organization and endeavor. But the reality is that we don’t live in the kind of world where everyone at MIRI gets a tech-worker salary just because they should; that money has to come from somewhere, and the human tendency to refuse to seriously think about contingencies just because they’re “unthinkably horrible” is the entire reason why a bunch of hobbyists from SF are humanity’s first line of defense in the first place. We could absolutely end up in a situation where MIRI needs to relocate from Berkeley to rural Vermont. That would still be better than having them work part-time training AI for random firms (or, god forbid, working full-time as ordinary software engineers).

So right now seems like the perfect time to start exchanging tips on saving money, on setting up group houses in the best possible places, and on how to prioritize between scenarios where everyone becomes much poorer (e.g. a second Cold War, or a 2008-style economic megafailure that upends the economic status quo far more than anything in 2020 or 2022 did) and scenarios where current living conditions hold. Because it can always, always get worse.