[Question] Is there anything that can stop AGI development in the near term?

Assume that short-timeline arguments are correct. (For previous discussion, see “What if AGI is near?”.)

Some possible ideas:

  • A multilateral, international governmental/legal agreement to halt GPU production or ban AI research

    • Surveillance systems that detect when someone is about to launch an AI system and report them to the authorities. Obviously, this would just be an implementation detail of the above idea.

  • An agreement among prominent AI researchers (but not necessarily governments) in multiple countries that further progress is dangerous and should be halted until alignment is better understood

  • A global nuclear war or some other disaster that would halt economic progress and damage supply chains

    • Nuclear EMPs that would damage many electrical systems, possibly including computing hardware, while limiting casualties

Even in these scenarios, it seems like further progress would still be possible as long as at least one research group with access to sufficient hardware can scale up existing methods. So I’m just curious: would it even be possible to stop AI research in the near term? (This is a separate question from whether it would be good to do so; there are obviously reasons why the ideas above could be quite terrible.)

Also, should we expect that, due to anthropic/observer selection effects, we will find ourselves living in a world where something like the dystopian scenarios discussed above happens, regardless of how unlikely such scenarios are a priori?