IMO, the “rapid takeoff” idea should probably be seen as a fundraising ploy. It’s big, scary, and it could conceivably happen—just the kind of thing for stimulating donations.
It seems that SIAI would have more effective methods for fundraising, e.g. simply capitalizing on “Rah Singularity!”. I therefore find this objection somewhat implausible.
I prefer this briefer formalization, since it avoids some of the vagueness of “adequate preparations” and makes premise (6) clearer.
1. At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving it from a non-dangerous level to superintelligence. (Fast take-off)
2. This AI will maximize a goal function.
3. Given fast take-off and maximization of a goal function, the superintelligent AI will gain a decisive advantage unless adequate controls are used.
4. Adequate controls will not be used. (E.g., we won't box the AI, or boxing won't work.)
5. Therefore, the superintelligent AI will have a decisive advantage.
6. If the superintelligent AI has a decisive advantage, civilization will be ruined unless that AI is designed with goals that stably and extremely closely align with ours. (Friendliness is necessary)
7. The AI will not be designed with goals that stably and extremely closely align with ours.
8. Therefore, civilization will be ruined shortly after the fast take-off.
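To check that the conclusion really does follow from the premises, the argument can be rendered as a propositional sketch in Lean 4. The proposition names are my own labels for the premises above, and premise 2 is folded into the antecedent of premise 3; this only checks logical validity, not the truth of any premise.

```lean
-- Hypothetical propositional reconstruction of premises 1–8 (names are mine).
variable (Takeoff Maximizes Controls Advantage Aligned Ruin : Prop)

example
    (p1 : Takeoff)                                      -- fast take-off occurs
    (p2 : Maximizes)                                    -- the AI maximizes a goal function
    (p3 : Takeoff → Maximizes → ¬Controls → Advantage)  -- premise 3
    (p4 : ¬Controls)                                    -- adequate controls not used
    (p6 : Advantage → ¬Aligned → Ruin)                  -- friendliness is necessary
    (p7 : ¬Aligned)                                     -- the AI won't be aligned
    : Ruin :=
  p6 (p3 p1 p2 p4) p7  -- steps 5 and 8 follow by modus ponens
```

Lean accepts the term, so the argument is deductively valid; the dispute is entirely over the premises.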