If you want to kill modern AI using existing law, and you have friends in the right government offices, it should be fairly straightforward to do so without any new legislation.
The relevant legal category, Restricted Data, is very aggressively defined: https://en.wikipedia.org/wiki/Restricted_Data
It was written to mean ‘if someone draws a working design for a nuclear bomb or certain kinds of nuclear material production equipment anywhere, that data is a state secret, regardless of the source of the information used to produce it’. This is commonly referred to as ‘born classified’. There are a good 70+ years of arguments about whether this is a good law, but that is the law.
Therefore, here is your process:
-Find an AI model that you reasonably believe is capable of outputting something the government will view as a classified fact related to nuclear weapon design. Edit: you should probably build it yourself, either by training from scratch or by fine-tuning an open model (a minimal sketch of the fine-tuning route follows this list).
-Send the model weights, installation instructions, and a letter to the DOE Office of Classification requesting that they determine that your model is NOT Restricted Data. Offer to send them hardware to run the model (you won't get it back). I am familiar with this classification regime: documents are not the only things that can be marked Restricted Data; physical embodiments (sculptures or actual objects) and computer programs (math models) can be classified under it as well.
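For the fine-tuning route mentioned above, here is a minimal sketch of what producing the artifact might look like, using the Hugging Face transformers and datasets libraries. The base model name, corpus path, and hyperparameters are placeholders of my own, not anything specified in the comment; the only point that matters is that the saved weights at the end are the object you would package up and submit for a determination.

```python
# Illustrative sketch only: fine-tune an open causal LM on a text corpus.
# "gpt2", "corpus.txt", and the hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # any open checkpoint stands in here
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Plain-text training corpus (hypothetical path).
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# The directory written here holds the weights you would ship off for review.
trainer.save_model("finetuned-model")
```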
The DOE can then either determine that your model contains Restricted Data (if the model can invent novel tech, it should be able to figure out 80-year-old tech, so this might not be a hard bar to clear), determine that it does not contain Restricted Data (in which case you should send them a bunch of outputs that look bad and see what they say), or determine that the class of material (a model) cannot be judged under the law.
Since the model is a unitary object, it will be quite hard to carve out 'these specific weights are where the Restricted Data lives' from the rest, so suddenly frontier models will become, through the stroke of a bureaucrat's pen, state secrets.
Edit: get someone with a current or former Q clearance to submit the model as their own work if you want to add 'it would be OK for this to be published, but not by you' to the list of possible outcomes. That would mean an AI researcher who wants to credibly take themselves out of AI research permanently could simply acquire a Q-cleared job at some point (LLNL is near the Bay). The possible positive (for OP) outcomes are 1) the US bureaucracy has a reason to slam the door on AI research globally in the name of counterproliferation, and 2) there is a Nunn-Lugar-type path for researchers to make a living without working on dangerous capabilities (or working slowly, only within the government, which is conservative, cost-constrained, and now staffed with people who wanted to make safety their mission). If you seize power, you don't need new legislation on safety; you only need some bureaucrats to choose to enforce the rule, plus potentially extra funding for military-industrial-complex contractor jobs.
Arguments of law in this context appear to me to be less important than arguments of power, but...if law matters, here is a law you can use, I guess?
The thread that comment came from was contentious; I got a lot of pushback, here and elsewhere, during the early GPT days for my opinion that transformers would be able to output interesting math.
Two years later, when 3.5 was out, I felt that my 'interesting' threshold had been crossed and that I had been technically correct, but I was still hearing the same arguments. I'm happy that, six years on, we have proof that my assessment of the potential of transformers was close to accurate; to be clear, that assessment was absolutely viewed as 'evidence that this person is crazy in a way that makes me want to avoid him'.
From a meta perspective, this post is probably not helping me appear sane.