People seem to be reacting to this as though it is bad news. Why? I’d guess the net harm caused by these investments is negligible, and this seems like a reasonable earning-to-give strategy.
Leopold himself also seems to me like a kind of low-integrity dude, and Shulman lending his skills and credibility to empowering him seems pretty bad for the world (I don’t hold this take confidently, but that’s where a lot of my negative reaction came from).
What gives you the impression of low integrity?
Situational Awareness did seem like it was basically trying to stoke a race, it seemed incompatible with other things he had told me and with what I had heard about his beliefs from others, and then he did go on to leverage that thing of all things into an investment fund, which does seem like it would straightforwardly hasten the end. Like, my honest guess is that he did write Situational Awareness largely to become more powerful in the world, not to help it orient, and turning it into a hedge fund was a bunch of evidence in that direction.
There have also been some other interactions I’ve had with him where he made a bunch of social slap-down motions towards people taking AGI or AGI risk seriously, in ways that seemed very power-play optimized.
Again, none of this is a confident take.
I also had a negative reaction to the race-stoking and so forth, but also, I feel like you might be judging him too harshly from that evidence? Consider for example that Leopold, like me, was faced with a choice between signing the NDA and getting a huge amount of money, and like me, he chose the freedom to speak. A lot of people give me a lot of credit for that and I think they should give Leopold a similar amount of credit.
The fund manages 2 billion dollars, and (from the linked webpage on their holdings) puts the money into chipmakers and other AI/tech companies. The money will be used to fund development of AI hardware and probably software, leading to growth in capabilities.
If you’re going to argue that $2 billion in AUM is too small an amount to have an impact on the cash flows of these companies and make developing AI easier, I think you would be incorrect (even OpenAI and Anthropic are only raising single-digit billions at the moment).
You might argue that other people would invest similar amounts anyway, so it’s better for the “people in the know” to do it and “earn to give”. I think that accelerating capabilities buildouts to use your cut of the profits to fund safety research is a bit like an arsonist donating to the fire station. I will also note that “if I don’t do it, they’ll just find somebody else” is a generic excuse built on the notion of a perfectly efficient market. In fact, this kind of reasoning allows you to do just about anything with an arbitrarily large negative impact for personal gain, so long as someone else exists who might do it if you don’t.
My argument wouldn’t start from “it’s fully negligible”. (Though I do think it’s pretty negligible insofar as they’re investing in big hardware & energy companies, which is most of what’s visible from their public filings; private companies wouldn’t show up in those filings.) Rather, it would be a quantitative argument that the value from donation opportunities is substantially larger than the harms from investing.
One intuition pump that I find helpful here: Would I think it’d be a highly cost-effective donation opportunity to donate [however much $ Carl Shulman is making] to reduce investment in AI by [however much $ Carl Shulman is counterfactually causing]? Intuitively, that seems way less cost-effective than normal, marginal donation opportunities in AI safety.
You say “I think that accelerating capabilities buildouts to use your cut of the profits to fund safety research is a bit like an arsonist donating to the fire station”. I could say it’s more analogous to “someone who wants to increase fire safety invests in the fireworks industry to get excess returns that they can donate to the fire station, which they estimate will prevent far more fires than their fireworks investments cause”, which seems very reasonable to me. (I think the main difference is that a very small fraction of fires are caused by fireworks. An even better comparison might be for a climate change advocate to invest in fossil fuels when that appears to be extremely profitable.)
Insofar as your objection isn’t swayed by the straightforward quantitative consequentialist case, but is more deontological-ish in nature, I’d be curious whether it ultimately backs out to something consequentialist-ish (maybe something about signaling to enable coordination around opposing AI?). Or whether it’s more of a direct intuition.
I’ve left a comment under Shulman’s comment that maybe explains slightly more.
I will also note that “if I don’t do it, they’ll just find somebody else” is a generic excuse built on the notion of a perfectly efficient market.
The existence of the Situational Awareness Fund is specifically predicated on the assumption that markets are not efficient, and that if they don’t invest in AI then AI will be under-invested.
(I don’t have a strong position on whether that assumption is correct, but the people running the fund ought to believe it.)
Further discussion by Carl is here.