I think you get very different answers depending on whether your question is “what is an example of a policy that makes it illegal in the United States to do research with the explicit intent of creating AGI” or whether it is “what is an example of a policy that results in nobody, including intelligence agencies, doing AI research that could lead to AGI, anywhere in the world”.
For the former, something like updates to the Export Administration Regulations could maybe make it de facto illegal to develop AI aimed at the international market. Historically, that approach briefly succeeded in making it illegal to intentionally export software that implemented strong encryption. It didn’t actually prevent the export, but it did arguably make that export unlawful. I’d recommend reading that article in full, actually, to give you an idea of how “what the law says” and “what ends up happening” can diverge.
I think “doesn’t fully understand the concept of superradiance” is a phrase that smuggles in too many assumptions here. If you rephrase it as “can determine when superradiance will occur, but makes inaccurate predictions about what physical systems will do in those situations” / “makes imprecise predictions in such cases” / “has trouble distinguishing cases where superradiance will occur vs cases where it will not”, all of those suggest pretty obvious ways of generating training data.
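For concreteness, here is roughly the shape of the thing I mean, as a minimal sketch. Everything in it is a hypothetical stand-in: `simulate_cavity`, the parameter names, and the toy threshold are placeholders for whatever trusted simulation or experiment you actually have, not real physics code.

```python
import json
import random

def simulate_cavity(num_atoms, coupling, decay_rate):
    """Stand-in for whatever trusted simulation (or experiment)
    actually tells us whether collective emission occurs."""
    # Toy criterion only: collective coupling outpaces single-atom decay.
    return num_atoms * coupling > decay_rate

examples = []
for _ in range(10_000):
    params = {
        "num_atoms": random.randint(2, 10_000),
        "coupling": random.uniform(0.0, 1.0),
        "decay_rate": random.uniform(0.0, 100.0),
    }
    examples.append({
        "prompt": f"Given {params}, will superradiance occur?",
        "label": "yes" if simulate_cavity(**params) else "no",
    })

# Dump the labeled examples as fine-tuning data.
with open("superradiance_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The point is just that once the complaint is phrased in terms of distinguishable cases and checkable predictions, “generate a pile of labeled cases and train on the ones the model gets wrong” becomes the obvious move.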
GPT-4 can already “figure out a new system on the fly” in the sense of taking some repeatable phenomenon it can observe, and predicting things about that phenomenon, because it can write standard machine learning pipelines, design APIs with documentation, and interact with documented APIs. However, the process of doing that is very slow and expensive, and resembles “build a tool and then use the tool” rather than “augment its own native intelligence”.
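To illustrate what I mean by “build a tool and then use the tool”: the artifact in question is something like the following sketch of a bog-standard pipeline (the phenomenon and data here are made up, and this isn’t actual GPT-4 output, just the shape of the thing).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Pretend these are logged observations of some repeatable phenomenon:
# inputs we can vary, plus the measured outcome.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(1000, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "tool": an ordinary regression pipeline fit to the observations.
model = make_pipeline(StandardScaler(), GradientBoostingRegressor())
model.fit(X_train, y_train)

# Using the tool: predict the phenomenon at settings we haven't tried yet.
print("held-out R^2:", model.score(X_test, y_test))
print("prediction at (1, 2, 3):", model.predict([[1.0, 2.0, 3.0]]))
```

The predictive power lives in the fitted pipeline, an external artifact the model writes and then queries, not in any improvement to the model’s own weights.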
Which makes sense. The story of human capability advances doesn’t look like “find clever ways to configure unprocessed rocks and branches from the environment in ways which accomplish our goals”; it looks like “build a bunch of tools, figure out which ones are most useful and how they are best used, use our best tools to build better tools, and so on, and then use the much-improved tools to do the things we want”.