Sufficiently many Godzillas as an alignment strategy

Assuming that alignment by default happens with nontrivial probability, one way to produce human-aligned AGIs would be to simultaneously create sufficiently many (different) AGIs. This leads to a multipolar scenario in which a few AGIs are aligned with humans and many are unaligned. (By unaligned I mean unaligned with humans; these AGIs may or may not be aligned with some other goal.)

Although it is true that in general having many AGIs is not good, the idea is that if some of those AGIs are aligned, the outcome may be better than having just one AGI that is probably not aligned. Or perhaps, once there are enough aligned AGIs, they may be able to overcome the unaligned AGIs, since the aligned ones are all working in the same direction while the unaligned ones are working towards different goals that are likely not directly opposed to those of the aligned AGIs. So it is useful to explore how the total number of AGIs in this scenario affects humanity's chances of survival. To make this more concrete, I pose the following question (but feel free to discuss the idea in general):


Assuming that all of the AGIs have roughly the same cognitive power, rank the following scenarios from best to worst outcome for humanity.
A) 1 AGI, with a 10% chance of being aligned
B) 1 aligned AGI and 9 unaligned AGIs
C) 10 aligned AGIs and 90 unaligned AGIs
D) 1000 aligned AGIs and 9000 unaligned AGIs
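
To make the comparison concrete, here is a purely illustrative toy simulation (my own assumptions, not part of the question itself): each AGI's capability is an independent lognormal draw, aligned AGIs pool their capability into one coalition, unaligned AGIs do not cooperate, and humanity survives iff the aligned coalition out-powers the strongest single unaligned AGI. All parameter choices (the lognormal distribution, sigma, the "beat the strongest individual" win condition) are arbitrary and only meant to show how one might make the ranking precise.

```python
import random

def simulate(n_aligned, n_unaligned, p_aligned=None, trials=1_000,
             sigma=1.0, seed=0):
    """Estimate P(humanity survives) under a toy model.

    Illustrative assumptions (mine, not from the post):
    - each AGI's capability is an independent lognormal(0, sigma) draw;
    - aligned AGIs cooperate, so their capabilities add up;
    - unaligned AGIs pursue separate goals and do not cooperate;
    - humanity survives iff the aligned coalition's total capability
      exceeds that of the strongest single unaligned AGI.
    If p_aligned is given (scenario A), the single AGI is simply aligned
    with that probability instead of using fixed counts.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if p_aligned is not None:
            # Scenario A: one AGI, aligned with probability p_aligned.
            wins += rng.random() < p_aligned
            continue
        aligned_total = sum(rng.lognormvariate(0, sigma)
                            for _ in range(n_aligned))
        strongest_unaligned = max(rng.lognormvariate(0, sigma)
                                  for _ in range(n_unaligned))
        wins += aligned_total > strongest_unaligned
    return wins / trials

scenarios = {
    "A (1 AGI, 10% aligned)":           dict(n_aligned=1, n_unaligned=0, p_aligned=0.1),
    "B (1 aligned, 9 unaligned)":       dict(n_aligned=1, n_unaligned=9),
    "C (10 aligned, 90 unaligned)":     dict(n_aligned=10, n_unaligned=90),
    "D (1000 aligned, 9000 unaligned)": dict(n_aligned=1000, n_unaligned=9000),
}
for name, kwargs in scenarios.items():
    print(f"{name}: estimated survival probability {simulate(**kwargs):.2f}")
```

Under this particular symmetric model, B comes out at exactly 1/10 (by exchangeability the single aligned AGI is the strongest of the ten with probability 1/10), the same as A, while C and D do progressively better because the pooled sum grows roughly linearly in the number of aligned AGIs while the maximum of the unaligned draws grows only slowly. A different cooperation or conflict model could easily reverse that ordering, which is exactly the kind of disagreement I am hoping the question surfaces.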