I agree that 1-2 logs isn't really in the category of x-risk. The longer the lead time on the evil plan (mixing chemicals, growing things, etc.), the more time security forces have to identify and neutralize the threat. So, all things being equal, it's probably better that a would-be terrorist spends a year planning a weird chemical scheme that hurts tens of people than that someone wakes up one morning and decides to run over tens of people with a truck.
There's a better chance of catching the first guy, and his plan is far more expensive in terms of time, money, and access to capital like LLM time. Sure, someone could argue about pandemic potential, but a lab origin is suspected for at least one influenza outbreak, and a lot of people believe it about COVID-19. Those weren't terrorists.
I suppose that, theoretically, there may be cyberweapons that qualify as WMDs, but that will be because of the systems they interact with. It's not the cyberweapon itself; it's the nuclear reactor accepting commands that lead to core damage.
I'd love a reply on this. Common attack vectors I read on this forum include:

1. A powerful elite bribes existing labs in the US to manufacture bioweapons.
2. A nation state sets up an independent biotech supply chain and starts manufacturing bioweapons.
On the first: this has been an option for decades, and a fully capable LLM does not meaningfully lower the threshold. It's already too easy.
On the second: this has been an option since the 1950s. Any national medical system is capable of doing this; Project Coast could be reproduced by nearly any nation state.
I'm not saying it isn't a problem; I'm just saying that LLMs don't make it worse.
I have yet to find a commercial LLM that I can't get to tell me how to build a working improvised explosive. (I can grade an LLM's performance here because I've worked with the USG on the issue and don't need an LLM to do evil.)
I wrote about something similar previously: https://www.lesswrong.com/posts/Ek7M3xGAoXDdQkPZQ/terrorism-tylenol-and-dangerous-information#a58t3m6bsxDZTL8DG
https://www.lesswrong.com/posts/DDtEnmGhNdJYpEfaG/joseph-miller-s-shortform?commentId=wHoFX7nyffjuuxbzT
Makes sense, thanks for replying.