Is this a weak pivotal act: creating nanobots that eat evil AGIs (but nothing else)?

I’ve seen the phrase “there are no weak pivotal acts” pretty often, but I have not been able to locate where this claim is actually explained.

The prototypical example of a strong pivotal act is “nanobots that eat GPUs” (my understanding is that this is meant as an oversimplified example). So: what would make “nanobots that eat evil AGIs” not a weak pivotal act? Some possibilities:

  1. It actually is a weak pivotal act, and I’m right and everyone else is wrong. (If only, right?)

  2. It is not weak because weakness means being passive in some way, and detecting and eating evil AGIs is too active.

  3. Even if the nanobots only eat evil AGIs, it’s not weak because it’s technically illegal. (Note that I think other AI labs would actually welcome the evil-AGI-eating nanobots, because they would no longer need to worry about safety. They could create any type of AI other than an evil AGI, and if they made one accidentally, the nanobots would eat it before it killed them.)

  4. Even an aligned superintelligence couldn’t accomplish this act safely and accurately (for example, the nanobots might accidentally eat non-AGIs).

  5. Something else?