Today, we are launching Red Queen Bio (http://redqueen.bio), an AI biosecurity company, with a $15M seed led by OpenAI. Biorisk grows exponentially with AI capabilities. Our mission is to scale biological defenses at the same rate.
Since 2016, I have been building HelixNano, a clinical stage biotech (and still my main gig), with Nikolai Eroshenko. Recently, HelixNano teamed up with OpenAI to push AI bio’s limits. To our surprise, we saw models invent genuinely new wet lab methods (publication soon).
We got super excited. There was a path to superhuman drug designers. But we couldn’t ignore the shadow of superhuman virus designers. A world with breakthrough AI drugs can’t exist without new biological defenses. We spun out Red Queen Bio to build them.
AI biosecurity is a different game from traditional biodefense, which faces relatively static threats on flat budgets. What do you do when the attack surface grows at the rate of AI progress, driven by trillions of dollars of compute?
Red Queen Bio’s core thesis is **defensive co-scaling.** You have to couple defensive capabilities and funding to the same technological and financial forces that drive the AGI race, otherwise they can’t keep up.
We work with frontier labs to map AI biothreats and pre-build medical countermeasures against them. For co-scaling to work, this needs to improve as models do, and scale with compute. So our pipeline is built upon the leading models themselves, lab automation and RL.
We also need *financial* co-scaling. Governments can’t have exponentially scaling biodefense budgets. But they can create the right market incentives, as they have done for other safety-critical industries. We’re engaging with policymakers on this both in the US and abroad.
RQB’s work is driven by a civilizational need. But the economic incentives are ultimately on our side too. The capital behind what may be the biggest industrial transformation in human history is not going to tolerate unpriced tail risk on the scale of COVID or bigger.
We are committed to cracking the business model for AI biosecurity. We are borrowing from fields like catastrophic risk insurance, and working directly with the labs to figure out what scales. A successful solution can also serve as a blueprint for other AI risks beyond bio.
This is bigger than us. No company, AI lab or government is going to solve defensive co-scaling alone. Accordingly, we are committed to open collaboration with them all. Red Queen Bio is a Public Benefit Corporation, with governance to ensure mission takes precedence over any individual partnership.
In case it’s not obvious, Red Queen Bio and defensive co-scaling are very much inspired by VitalikButerin’s d/acc philosophy. We find it inspiring, but differ in a couple of important ways.
First, we are skeptical that the d/acc approach of building purely defensive capabilities first is possible: in our view, they have to piggyback on general capabilities.
In contrast to d/acc, we also believe it’s hard to maintain defender advantage through de-centralization alone. For the sci-fi fans, writing DARKOME (a near-future biotech thriller) in part changed my mind on this!
But we heartily agree with VitalikButerin on the brightness and centrality of human kindness and agency.
In the face of fast AI timelines and the enormity of the stakes, it’s easy to feel trapped in the AGI race dynamic. But the incentive structures driving it are not physical laws. They are no more real than others we can create.
By launching Red Queen Bio, we are choosing a different race. One where defense keeps up with offense and economics spurs safety.
The starting pistol has gone off. It’s time to run together.
I’m a little confused about what’s going on, since apparently the explicit goal of the company is to defend against biorisk and make sure that biodefense capabilities keep up with AI developments. When I first saw this thread I was like “I’m not sure of what exactly they’ll do, but better biodefense is definitely something we need, so this sounds like good news and I’m glad that Hannu is working on this”.
I do also feel that the risk of rogue AI makes it much more important to invest in biodefense! I’d very much like it if we had automated defenses to the degree that the “rogue AI creates a new pandemic” threat vector was eliminated entirely. Of course there’s the risk of the AI taking over those labs, but in the best case we’ll also have deployed more narrow AI to identify and eliminate all cybersecurity vulnerabilities before that.
And I don’t really see a way to defend against biothreats if we don’t do something like this (which isn’t to say one couldn’t exist; I also haven’t thought about this extensively, so maybe there is something). The human body wouldn’t survive for very long if it didn’t have an active immune system.
Thanks for sharing, this is extremely important context. I’m way more OK with dual-use threats from a company that is actively trying to reduce bio risk from AI and seems to have vaguely reasonable threat models, than from reckless gain-of-function people with insane threat models. It’s much less clear to me how much risk is OK to accept from projects actively doing reasonable things to make it better, but it’s clearly non-zero. (I don’t know if this place is actually doing reasonable things, but Mikhail provides no evidence against it.)
I think it was pretty misleading for Mikhail not to include this context in the original post.
Uhm, yeah, valid. I guess the issue was illusion of transparency: I mostly copied the original post from my tweet, which was quote-tweeting the announcement, and I didn’t particularly think about adding more context because I had it cached that the tweet was fine (I checked with people closely familiar with RQB before tweeting, and it did include all of the context by virtue of quote-tweeting the original announcement). When posting to LW, I didn’t realize that I wasn’t directly adding all of the context that was included in the tweet if people don’t click on the link.
Added the context to the original post.
Separately, I think an issue is that they’re incredibly non-transparent about what they’re doing and have been somewhat misleading in their responses to my tweets, while not answering any of the questions.
Like, I can see a case for doing gain-of-function research responsibly to develop protection against threats (vaccines, proteins that would bind to viruses, etc.), but this should include incredible transparency, strong security (BSL & computer security & strong guardrails around what exactly AI models have automated access to), etc.
Thanks for adding the context!
I can’t really fault them for not answering or being fully honest; from their perspective, you’re a random dude who’s attacking them publicly and trying to get them lots of bad PR. I think it’s often very reasonable to just not engage in situations like that. Though I would judge them for outright lying.
That’s somewhat reasonable. (They did engage, though: they made a number of comments and quote-tweeted my tweet, without addressing the main questions at all.)
Sure, but there’s a big difference between engaging in PR damage-control mode and actually seriously engaging. I don’t take them choosing to be in the former as significant evidence of wrongdoing.
Agree; I’d also like to emphasize this part:

Since 2016, I have been building HelixNano, a clinical stage biotech (and still my main gig), with Nikolai Eroshenko. Recently, HelixNano teamed up with OpenAI to push AI bio’s limits. To our surprise, we saw models invent genuinely new wet lab methods (publication soon).

We got super excited. There was a path to superhuman drug designers. But we couldn’t ignore the shadow of superhuman virus designers. A world with breakthrough AI drugs can’t exist without new biological defenses. We spun out Red Queen Bio to build them.
Based on this, they didn’t need to set up a new company. They already had an existing biotech company that was focused on its own research, when they realized that “oh fuck, based on our current research things could get really bad unless someone does something”… and then they went Heroic Responsibility and spun out a whole new company to do something, rather than just pretending that no dangers existed or making vague noises and asking for government intervention or something.
It feels like being hostile toward them is a bit Copenhagen Ethics, in that if they hadn’t tried to do the right thing, it’s possible that nobody would have heard about this and things would have been much easier for them. But since they were thinking about the consequences of their research, decided to do something about it, and said so in public, they’re now getting piled on for not answering every question they’re asked on X. (And if I were them, I might also have concluded that the other side is so hostile that every answer might be interpreted in the worst possible light, and that it’s better not to engage.)