If all the bad actor wants from the AI is a clear set of instructions for a dangerous weapon, plus a bit of help debugging lab errors… that costs only a trivial amount of inference compute.
In the paper, keeping weaker actors from getting access to frontier models and too much compute is the focus of the Nonproliferation chapter. The framing there suggests that in certain respects open weights models don’t make nearly as much of a difference as one might assume. This is useful for distinguishing among the various problems open weights models can cause, rather than attributing every possible problem to them equally.