This is an excellent write-up. I’m pretty new to the AI safety space, and as I’ve been learning more (especially with regard to the key players involved), I have found myself wondering why more people do not view Dario through a more critical lens. As you detailed, it seems like he was one of the key engines behind scaling, and I wonder whether AI progress would have advanced as quickly as it did if he had not championed it. I’m curious to know if you have any plans to write up an essay about OpenPhil and the funding landscape. I know you mentioned Holden’s investments in Anthropic, but another thing I’ve noticed as a newcomer is just how many safety organizations OpenPhil has helped to fund. Anecdotally, I have heard a few people in the community complain that they feel that OpenPhil has made it more difficult to publicly advocate for AI safety policies because they are afraid of how it might negatively affect Anthropic.
Glad it’s insightful.

> I’m curious to know if you have any plans to write up an essay about OpenPhil and the funding landscape.
It would be cool for someone to write about how the funding landscape is skewed. Basically, most of the money has gone into trying to make safe the increasingly complex and unscoped AI developments that people see happening, or expect to happen, anyway.
In recent years, there has finally been some funding for groups that actively try to coordinate with an already concerned public to restrict unsafe developments (especially SFF grants funded by Jaan Tallinn). However, people in the OpenPhil network especially have continued to prioritise working with AGI development companies and national security interests, and it’s concerning how this tends to involve making compromises that support a continued race to the bottom.
> Anecdotally, I have heard a few people in the community complain that they feel that OpenPhil has made it more difficult to publicly advocate for AI safety policies because they are afraid of how it might negatively affect Anthropic.
I’d be curious to hear about any ways that OpenPhil has specifically made it harder to publicly advocate for AI safety policies. Does anyone have specific experiences or cases they want to share here?