The fund manages $2 billion and, from the linked webpage on their holdings, puts the money into chipmakers and other AI/tech companies. That money will fund the development of AI hardware, and probably software, leading to growth in capabilities.
If you’re going to argue that $2 billion in AUM is too small to affect these companies’ cash flows and make developing AI easier, I think you would be incorrect (even OpenAI and Anthropic are only raising single-digit billions at the moment).
You might argue that other people would invest similar amounts anyway, so it’s better for the “people in the know” to do it and “earn to give”. I think that accelerating capabilities buildouts to use your cut of the profits to fund safety research is a bit like an arsonist donating to the fire station. I will also note that “if I don’t do it, they’ll just find somebody else” is a generic excuse built on the notion of a perfectly efficient market. In fact, this kind of reasoning lets you do just about anything with an arbitrarily large negative impact for personal gain, so long as someone else exists who might do it if you don’t.
My argument wouldn’t start from “it’s fully negligible”. (Though I do think it’s pretty negligible insofar as they’re investing in big hardware & energy companies, which is most of what’s visible from their public filings. That said, private companies wouldn’t show up in those filings.) Rather, it would be a quantitative argument that the value from donation opportunities is substantially larger than the harms from investing.
One intuition pump that I find helpful here: Would I think it’d be a highly cost-effective donation opportunity to donate [however much $ Carl Shulman is making] to reduce investment in AI by [however much $ Carl Shulman is counterfactually causing]? Intuitively, that seems way less cost-effective than normal, marginal donation opportunities in AI safety.
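To make the intuition pump above a bit more concrete, here’s a minimal back-of-the-envelope sketch (in Python) of the comparison being gestured at: harm per dollar of counterfactually caused investment times dollars invested, versus benefit per dollar of marginal safety funding times dollars donated. All names and numbers below are arbitrary placeholders for illustration, not estimates of any real quantity.

```python
# Back-of-the-envelope comparison: do the donated profits plausibly outweigh
# the harm from the counterfactual investment that generated them?
# Every input here is a hypothetical placeholder, not an estimate.

def net_effect(
    counterfactual_investment: float,   # $ of AI investment that wouldn't happen otherwise
    harm_per_invested_dollar: float,    # harm (arbitrary units) per $ of extra AI investment
    donated_earnings: float,            # $ of profits actually donated to safety work
    benefit_per_donated_dollar: float,  # benefit (same units) per $ of marginal safety funding
) -> float:
    """Positive result means the donations outweigh the investment harm."""
    harm = counterfactual_investment * harm_per_invested_dollar
    benefit = donated_earnings * benefit_per_donated_dollar
    return benefit - harm

# Purely illustrative call, with arbitrary numbers chosen only to show the
# structure of the argument (the whole disagreement is over these ratios):
print(net_effect(
    counterfactual_investment=1e8,
    harm_per_invested_dollar=1.0,
    donated_earnings=2e7,
    benefit_per_donated_dollar=10.0,
))
```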
You say “I think that accelerating capabilities buildouts to use your cut of the profits to fund safety research is a bit like an arsonist donating to the fire station”. I could say it’s more analogous to “someone who wants to increase fire safety invests in the fireworks industry to get excess returns that they can donate to the fire station, which they estimate will prevent far more fires than their fireworks investment causes”, which seems very reasonable to me. (I think the main difference is that a very small fraction of fires are caused by fireworks. An even better comparison might be a climate change advocate investing in fossil fuels when that appears to be extremely profitable.)
Insofar as your objection isn’t swayed by the straightforward quantitative consequentialist case, but is more deontological-ish in nature, I’d be curious whether it ultimately backs out to something consequentialist-ish (maybe something about signaling to enable coordination around opposing AI?), or whether it’s more of a direct intuition.
“I will also note that ‘if I don’t do it, they’ll just find somebody else’ is a generic excuse built on the notion of a perfectly efficient market.”
The existence of the Situational Awareness Fund is specifically predicated on the assumption that markets are not efficient, and if they don’t invest in AI then AI will be under-invested.
(I don’t have a strong position on whether that assumption is correct, but the people running the fund ought to believe it.)
I’ve left a comment under Shulman’s comment that maybe explains slightly more.