I have just seen this in curated, but I had previously commented on Zvi’s reporting on it.
Obviously, any nation state aware of the escalation ladder that wanted to be the first to develop ASI would put its AI cluster deep underground and air-gap it. We must not allow a mine shaft gap and all that. Good luck to its peer superpowers trying to actually conquer that hardened bunker.
Also, to MAIM, you have to know that you are in imminent danger. But with ASI nobody is sure when the point of fast takeoff—if there is any—might start. Is that cluster in that mine still trying to catch up to ChatGPT, or has it reached the point where it can do useful AI research and find algorithmic gains far beyond what humans would have discovered in a millennium? Would be hard to tell from the outside.
> Emphasize terrorist-proof security over superpower-proof security.
>
> Though there are benefits to state-proof security (SL5), this is a remarkably daunting task that is arguably much less crucial than reaching security against non-state actors and insider threats (SL3 or SL4).
This does not seem to have anything to do with superintelligence. Daesh is not going to be the first group to build ASI, not in a world where US AI companies burn through billions to get there as soon as possible.
The Superintelligence Strategy paper mentions the 1995 Tokyo subway sarin attack, which killed 13 people. If anything, that attack highlights how utterly impractical nerve gas is for terrorist attacks. That particular group of crazies spent a lot of time synthesizing a nerve agent (along with a few other flashy plans), only for their death toll to end up similar to that of a lone-wolf school shooter or someone driving a truck into a crowd. Even if their death toll had been increased by an order of magnitude due to an AI going "Sure, here are some easy ways to disperse sarin in a subway carriage", their attacks would still be pretty ineffective compared to more mundane attacks such as bombs or knives.
Basically, when DeepSeek released their weights (so terrorist groups can run the model locally instead of foolishly relying on company-hosted AI services, where any question about the production of WMDs would raise a giant red flag), I did not expect this to be a significant boon for terrorists, and so far I have seen nothing to convince me otherwise.
But then again, that paper is clearly targeted at the state security apparatus, and terrorists have been the bogeyman of that apparatus since GWB, so it seems obvious to emphasize the dangers of AI with "but what if terrorists use it" instead of talking about x-risks or the like.