[Question] What are the most plausible “AI Safety warning shot” scenarios?

An “AI safety warning shot” is some event that causes a substantial fraction of the relevant human actors (governments, AI researchers, etc.) to become substantially more supportive of AI safety research and more worried about existential risks posed by AI.

For example, suppose we build an unaligned AI system which is “only” about as smart as a very smart human politician, and it escapes and tries to take over the world, but only succeeds in taking over North Korea before it is stopped. This would presumably have the “warning shot” effect.

I currently think that scenarios like this are not very plausible, because there is only a very narrow range of AI capability between “too stupid to do significant damage of the sort that would scare people” and “too smart to fail at takeover if it tried.” Moreover, a system within that narrow range would probably realize it is in that range, and thus bide its time rather than attempt something risky.

EDIT: To make what I mean by “substantial” more precise: I’m looking for events that cause >50% of the relevant people who are, at the time, skeptical or dismissive of existential risk from AI to change their minds.