I think efforts to reduce insider risk are also really valuable, but these look less like the kind of technical work I’ve been focusing on and more like better policies at labs, such as not engaging in particular kinds of risky research.
Out of your proposal, it seems to me that the LLM question is a policy question. Faster evaluation of vaccines also is a lot about policy.
In general, that sentiment sounds a bit like “It’s easy to search for the keys under the lamppost, so that’s what I will do.”
Esvelt’s threat model doesn’t include “people who work on vaccines release the pathogen for their own gain,” which is what Bruce Edwards Ivins did according to the FBI.
Esvelt does say dangerous things like “Only after intense discussions at the famous Asilomar conference of 1975 did they correctly conclude that recombinant DNA within carefully chosen laboratory-adapted constructs posed no risk of spreading on its own.”
While you might argue that the amount of risk is acceptable, pretending that it’s zero costs Kevin Esvelt a good deal of credibility when it comes to the actual work of reducing risk. He lists a bunch of interventions that EA funders can spend their money on so that they can feel like they are taking effective action on biorisk while not addressing the center of the risk.