Unlike most other charitable causes, AI safety affects rich people. This suggests to me that advocacy may be a more effective strategy than direct funding.
The problem is that empirically, rich people who hear about AI safety (e.g. Musk, Open Philanthropy) seem to end up founding (OpenAI, xAI) or funding (Anthropic) AI labs instead. And even if you specifically want to fund AI safety work rather than AI capabilities, delegation is difficult regardless of how much money you have.
That is a serious concern. It is possible that advocacy could backfire. That said, I'm not sure the better explanation isn't simply "rich people start AI companies, and sometimes advocacy isn't enough to stop this". Either way, the solution seems to be better advocacy: maybe split testing, focus testing, or other market research before deploying a strategy, and devoting some intellectual resources to improving advocacy, at least in the short term.
As for the knowledge bottleneck, I think that's a very good point. My comment doesn't remove that bottleneck, it just shifts it to advocacy (i.e. maybe we need better knowledge on how or what to advocate).
Indeed. But what can these rich people do about that? Most of them don't have the expertise to evaluate particular AI alignment projects. They need intermediaries for that, and there are funds in place that do the job.
This is basically how the alignment funding ecosystem works: the community advocates to rich people, and they donate money to said funds.
Like you said, the rich people can do the bulk of the donating to alignment research, while less rich people can either focus on advocacy or donate to those doing advocacy. If the ecosystem is already doing this, then that's great!