My argument wouldn’t start from “it’s fully negligible”. (Though I do think it’s pretty negligible insofar as they’re investing in big hardware & energy companies, which is most of what’s visible from their public filings; private companies, of course, wouldn’t show up there.) Rather, it would be a quantitative argument that the value from donation opportunities is substantially larger than the harms from investing.
One intuition pump that I find helpful here: Would I think it’d be a highly cost-effective donation opportunity to donate [however much $ Carl Shulman is making] to reduce investment in AI by [however much $ Carl Shulman is counterfactually causing]? Intuitively, that seems way less cost-effective than normal, marginal donation opportunities in AI safety.
You say “I think that accelerating capabilities buildouts to use your cut of the profits to fund safety research is a bit like an arsonist donating to the fire station”. I could say it’s more analogous to “someone who wants to increase fire safety invests in the fireworks industry to get excess returns that they can donate to the fire station, which they estimate will prevent far more fires than their fireworks investment causes”, which seems very reasonable to me. (I think the main difference is that a very small fraction of fires are caused by fireworks. An even better comparison might be a climate change advocate investing in fossil fuels when that appears to be extremely profitable.)
Insofar as your objection isn’t swayed by the straightforward quantitative consequentialist case, but is more deontological-ish in nature, I’d be curious whether it ultimately backs out to something consequentialist-ish (maybe something about signaling to enable coordination around opposing AI?). Or whether it’s more of a direct intuition.
I’ve left a comment under Shulman’s comment that maybe explains slightly more.