fwiw my guess is that OP didn’t ask its grantees to do open-source LLM biorisk work at all; I think its research grantees generally have lots of freedom.
(I’ve worked for an OP-funded research org for 1.5 years. I don’t think I’ve ever heard of OP asking us to work on anything specific, nor of us working on something because we thought OP would like it. Sometimes we receive restricted, project-specific grants, but I think those projects were initiated by us. Oh, one exception: Holden’s standards-case-studies project.)
Also note that OpenPhil has funded the Future of Humanity Institute, the organization that houses the author of the paper 1a3orn cited for the claim that knowledge is not the main blocker for creating dangerous biological threats. My guess is that the dynamic 1a3orn describes is more about what things look juicy to the AI safety community, and less about funders specifically.
You meant to say “Future of Humanity Institute”.
Yet more proof that one of those orgs should change their name.