The problem is that empirically, rich people who hear about AI safety (e.g. Musk, OpenPhilanthropy) seem to end up founding (OpenAI, xAI) or funding (Anthropic) AI labs instead. And even if you specifically want to fund AI safety work rather than AI capabilities, delegation is difficult regardless of how much money you have.
That is a serious concern. It is possible that advocacy could backfire. That said, I’m not sure the correct hypothesis isn’t just “rich people start AI companies, and sometimes advocacy isn’t enough to stop this”. Either way, the solution seems to be better advocacy: perhaps split testing, focus testing, or other market research before deploying a strategy, and devoting some intellectual resources to improving advocacy itself, at least in the short term.
As for the knowledge bottleneck—I think that’s a very good point. My comment doesn’t remove that bottleneck, it just shifts it to advocacy (i.e. maybe we need better knowledge of how or what to advocate).