I think this essay is overall correct and very important. I appreciate you trying to protect the epistemic commons, and I think such work should be compensated. I disagree with some of the tone and framing, but overall I believe you are right that the current public evidence for AI-enabled biosecurity risks is quite bad and substantially behind the confidence/discourse in the AI alignment community.
I think the more likely hypothesis for the causes of this situation isn't particularly related to Open Philanthropy and is much more about bad social truth-seeking processes. There seems to be a fair amount of vibes-based deferral going on, whereas, e.g., much of the research you discuss is subtly about future systems in a way that is easy to miss. I think we'll see more and more of this as AI deployment continues and the world gets crazier — the ratio of "things people have carefully investigated" to "things they say" is going to drop. In this particular case, I expect many of the AI alignment people didn't bother looking closely into the AI-biosecurity work because it feels outside their domain, and instead took headline results at face value; I definitely did some of this myself. This deferral process is exacerbated by the risk of info hazards and the resulting limits on information sharing.