I’m curious to know if you have any plans to write up an essay about OpenPhil and the funding landscape.
It would be cool for someone to write about how the funding landscape is skewed. Basically, most of the money has gone into trying to make safe the increasingly complex and unscoped AI developments that people are seeing or expecting to happen anyway.
In recent years, there has finally been some funding of groups that actively try to coordinate with an already concerned public to restrict unsafe developments (especially SFF grants funded by Jaan Tallinn). However, people in the OpenPhil network especially have continued to prioritise working with AGI development companies and national security interests, and it’s concerning how this tends to involve making compromises that support a continued race to the bottom.
Anecdotally, I have heard a few people in the community complain that OpenPhil has made it more difficult for them to publicly advocate for AI safety policies, because they are afraid of how such advocacy might negatively affect Anthropic.
I’d be curious to hear about any ways in which OpenPhil has specifically made it harder to publicly advocate for AI safety policies. Does anyone have specific experiences or cases they want to share here?
Glad it’s insightful.