ah, sorry about mis-framing your comment! i tend to use the term “FDT” casually to refer to “instead of individual acts, try to think about policies and how they would apply to agents in my reference class(es)” (which i think does apply here, as i consider us to plausibly share a reference class).
There is a question about whether the safety efforts your money supported at or around the companies ended up compensating for the developments
yes. more generally, sign uncertainty sucks (and is a recurring discussion topic in SFF round debates).
It seems that if Dustin and you had not funded Series A of Anthropic, they would have had a harder time starting up.
they certainly would not have had a harder time setting up the company, nor getting an equivalent level of funding (perhaps even at a better valuation). it’s plausible that pointing to “aligned” investors helped with initial recruiting — but that’s unclear to me. my model is that dario and the other founders just did not want the VC profit motive to play a big part in the initial strategy.
Does this have to do with liquidity issues or something else?
yup, liquidity (also see the comments below), crypto prices, and about half of my philanthropy not being listed on that page. also, the SFF s-process works with aggregated marginal value functions, so there is no hard cutoff (hence the “evaluators could not make grants that they wanted to” sentence makes less sense here than it would in a traditional “chunky and discretionary” philanthropic context).
indeed, illiquidity is a big constraint on my philanthropy, so in very short timelines my “invest (in startups) and redistribute” policy does not work too well.
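to make the “no hard cutoff” point concrete, here is a toy sketch (emphatically not the actual s-process code; the orgs, value curves and numbers are made up for illustration) of allocating a budget against aggregated marginal value functions: funding simply flows to whichever org currently has the highest-value marginal dollar, so the allocation tapers off instead of hitting a discrete line.

```python
# toy illustration of "no hard cutoff" under aggregated marginal value
# functions (hypothetical curves, not the real SFF s-process).

def allocate(budget, curves, step=1_000):
    """Greedily hand out `step`-sized chunks to whichever org's next
    dollar is currently judged most valuable, until the budget runs out
    or no org has positive marginal value left."""
    granted = {org: 0 for org in curves}
    remaining = budget
    while remaining >= step:
        best = max(curves, key=lambda org: curves[org](granted[org]))
        if curves[best](granted[best]) <= 0:
            break  # nothing left with positive marginal value
        granted[best] += step
        remaining -= step
    return granted

# made-up aggregated marginal value curves: value of the next dollar,
# declining as an org receives more funding.
curves = {
    "org_a": lambda x: 10.0 - x / 50_000,
    "org_b": lambda x: 7.0 - x / 100_000,
    "org_c": lambda x: 4.0 - x / 20_000,
}

print(allocate(500_000, curves))
# funding stops wherever the marginal value curves happen to meet the
# budget constraint; there is no pre-set cutoff.
```

(in the real s-process, as i understand it, the curves come from evaluators and are then aggregated; the toy above only shows why the resulting allocation has no discrete threshold.)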
These investors were Dustin Moskovitz, Jaan Tallinn and Sam Bankman-Fried
nitpick: SBF/FTX did not participate in the initial round—they bought $500M worth of non-voting shares later, after the company was well on its way.
more importantly, i often get the criticism “if you’re concerned about AI, then why do you invest in it?”. even though the critics usually (and incorrectly) imply that AI would not happen (at least not nearly as fast) if i did not invest, i acknowledge that this is a fair criticism from the FDT perspective (as witnessed by wei dai’s recent comment about how he declined the opportunity to invest in anthropic).
i’m open to improving my policy (which is, empirically, also correlated with the respective policies of dustin as well as FLI) of, roughly, “invest in AI and spend the proceeds on AI safety” — but the improvements need to take into account that a) prominent AI founders have no trouble raising funds (in most of the alternative worlds anthropic is VC funded from the start, like several other openAI offshoots), b) the volume of my philanthropy is correlated with my net worth, and c) my philanthropy is more needed in the worlds where AI progresses faster.
EDIT: i appreciate the post otherwise—upvoted!
DeepMind was funded by Jaan Tallinn and Peter Thiel
i did not participate in DM’s first round (series A) -- my investment fund invested in series B and series C, and ended up with about a 1% stake in the company. this sentence is therefore moderately misleading.
the video that made FFT finally click for me:
this was good.
my most fun talk made a similar claim:
no plan: my timelines are quite uncertain (and even if i knew for sure that money would stop mattering in 2 years, it’s not obvious at all what to spend it on).
yup, it’s about options (both in my philanthropy and in my investments). that, and some path-dependency: when i got interested in AI safety, almost all the people who knew anything about it were in the bay area (plus some in oxford at the FHI) -- so that’s where i found my collaborators.
correct! i’ve tried to use this symmetry argument (“how do you know you’re not the clone?”) over the years to explain the multiverse: https://youtu.be/29AgSo6KOtI?t=869
interesting! still, aestivation seems to easily trump the black hole heat dumping, no?
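for reference, the back-of-the-envelope comparison behind that intuition, as i understand it (standard landauer / hawking formulas; the numbers are order-of-magnitude only):

```latex
% landauer bound: minimum energy to erase one bit at temperature T,
% so irreversible computations per joule scale as 1/T.
E_{\mathrm{bit}} = k_B T \ln 2

% a solar-mass black hole used as a cold heat sink has hawking temperature
T_{\mathrm{BH}} = \frac{\hbar c^3}{8 \pi G M k_B} \approx 6 \times 10^{-8}\ \mathrm{K}

% whereas the far-future de sitter background temperature is roughly
T_{\mathrm{dS}} = \frac{\hbar H}{2 \pi k_B} \sim 10^{-30}\ \mathrm{K}

% so aestivating until the universe cools buys ~20+ orders of magnitude
% more bit erasures per joule than dumping heat into a stellar black hole.
```

(even a supermassive black hole, with T ∝ 1/M, stays many orders of magnitude hotter than the de sitter floor, so the comparison doesn’t flip.)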
dyson spheres are for newbs; real men (and ASIs, i strongly suspect) starlift.
thank you for continuing to stretch the overton window! note that, luckily, the “off-switch” is now inside the window (though just barely so, and i hear that big tech is actively—and very myopically—lobbying against on-chip governance). i just got back from a UN AIAB meeting and our interim report does include the sentence “Develop and collectively maintain an emergency response capacity, off-switches and other stabilization measures” (while the rest of the report assumes that AI will not be a big deal any time soon).
thanks! basically, i think that the top priority should be to (quickly!) slow down the extinction race. if that’s successful, we’ll have time for more deliberate interventions — and the one you propose sounds confidently net positive to me! (with sign uncertainties being so common, confidently net-positive interventions are surprisingly rare).
AI takeover.
i might be confused about this, but “witnessing a super-early universe” seems to support “a typical universe moment is not generating observer moments for your reference class”. but, yeah, anthropics is very confusing, so i’m not confident in this.
the three most convincing arguments i know for OP’s thesis are:
- atoms on earth are “close by” and thus much more valuable to a fast-running ASI than atoms elsewhere.
- (somewhat contrary to the previous argument) an ASI will be interested in quickly reaching the edge of the hubble volume, as that’s slipping behind the cosmic horizon — so it will starlift the sun for its initial energy budget.
- robin hanson’s “grabby aliens” argument: witnessing a super-young universe (as we do) is strong evidence against it remaining compatible with biological life for long (rough version of the math sketched below).
that said, i’m also very interested in the counterarguments (so thanks for linking to paul’s comments!) — especially if they’d suggest actions we could take in preparation.
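the rough math behind the earliness argument, following hanson et al.’s hard-steps model (the exponent n and the habitable-window length below are illustrative choices, not precise estimates):

```latex
% hard-steps model: the chance that advanced life has appeared on a
% habitable planet by time t rises steeply, roughly as a power law
P(\text{advanced life by } t) \;\propto\; t^{\,n}, \qquad n \approx 6

% if the universe stayed hospitable for ~10 trillion years (long-lived
% red dwarfs), showing up as early as t \approx 13.8\ \mathrm{Gyr} has probability
\left(\frac{13.8\ \mathrm{Gyr}}{10^{4}\ \mathrm{Gyr}}\right)^{6} \approx 10^{-17}

% i.e. our earliness is wildly surprising, unless something (such as
% expanding "grabby" civilizations) effectively cuts the window short.
```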
i would love to see competing RSPs (or, better yet, RTDPs, as @Joe_Collman pointed out in a cousin comment).
1. i agree. as wei explicitly mentions, signalling approval was a big reason why he did not invest, and it definitely gave me pause, too (i had a call with nate & eliezer on this topic around that time). still, if i try to imagine a world where i declined to invest, i don’t see it being obviously better (ofc it’s possible that the difference is yet to reveal itself).
concerns about startups being net negative are extremely rare (outside of AI, i can’t remember any other case—though it’s possible that i’m forgetting some). i believe this is the main reason why VCs and SV technologists tend to be AI xrisk deniers (another being that it’s harder to fundraise as a VC/technologist if you have sign uncertainty) -- their prior is too strong to consider AI an exception. a couple of years ago i was at an event in SF where top tech CEOs talked about wanting to create “lots of externalities”, implying that externalities can only be positive.
2. yeah, the priorities page is now more than a year old and badly in need of an update. thanks for the criticism—fwded to the people drafting the update.