Intelligence risk and distance to endgame

There are at least three objections to the risk of an unfriendly AI. One is that uFAI will be stupid: it is not possible to build a machine that is much smarter than humanity. Another is that AI could be powerful but uFAI is unlikely: the chance that someone builds something that turns out malign, either deliberately or accidentally, is small. A third, which I haven't seen articulated, is that the AI could be malign and potentially powerful, yet effectively impotent due to its situation.

To use a chess analogy: I'm virtually certain that Deep Blue would beat me at a game of chess. I'm also pretty sure that a better chess program with vastly more computing power would beat Deep Blue. But I'm also (almost) certain that I would beat them both in a rook-and-king versus king endgame, as long as I'm the one holding the rook.

If we try to separate out the axes of intelligence and starting position, where does your intuition tell you the danger lies? To illustrate, what is the probability that humanity is screwed in each of the following scenarios?

1) A lone human paperclip cultist resolves to convert the universe to paperclips (but doesn't use AI).

2) One quarter of the world has converted to paperclip cultism and war ensues. No-one has AI.

3) A lone paperclip cultist gives a seed AI the goal of paperclip conversion and uploads it to a botnet.

4) As in 2), but the cultists have a superintelligent AI to advise them.