What is the best compact formalization of the argument for AI risk from fast takeoff?

Many people complain that the Singularity Institute’s “Big Scary Idea” (AGI leads to catastrophe by default) has not been argued for with the clarity of, say, Chalmers’ argument for the singularity. The idea would be to make the argument’s premise-and-inference structure explicit, and then argue about the strength of those premises and inferences.

Here is one way you could construe one version of the argument for the Singularity Institute’s “Big Scary Idea”:

  1. At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to superintelligence. (Fast takeoff)

  2. This AI will maximize a goal function.

  3. Given fast takeoff and maximizing a goal function, the superintelligent AI will have a decisive advantage unless adequate controls are used.

  4. Adequate controls will not be used. (E.g., the AI won’t be boxed, or boxing won’t work.)

  5. Therefore, the superintelligent AI will have a decisive advantage. (From 1–4)

  6. If the superintelligent AI has a decisive advantage and is not designed with goals that stably align with ours, civilization will be ruined. (Friendliness is necessary)

  7. Unless the first team that develops the superintelligent AI makes adequate preparations, the superintelligent AI will not have goals that stably align with ours.

  8. Therefore, unless the first team that develops the superintelligent AI makes adequate preparations, civilization will be ruined shortly after fast takeoff. (From 5–7)

  9. The first team that develops the superintelligent AI will fail to make adequate preparations.

  10. Therefore, civilization will be ruined shortly after fast takeoff. (From 8 and 9)

Edit to add: each premise should be read as assuming the truth of all preceding premises. E.g., (9) assumes that we have already created an artificial agent with a decisive advantage.
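
To make the inference steps explicit, here is a minimal propositional sketch in Lean 4. The proposition names (FastTakeoff, DecisiveAdvantage, etc.) and hypothesis labels (p1–p9) are placeholders of my own, not anything canonical; the sketch only shows that (5), (8), and (10) follow from the stated premises, not that the premises themselves are true.

```lean
-- Propositions standing in for the claims in the premises above
-- (the names are shorthand, not part of the original argument).
variable (FastTakeoff GoalMaximizer AdequateControls DecisiveAdvantage
          AdequatePreparations StableAlignment CivilizationRuined : Prop)

-- (5): a decisive advantage, from premises (1)–(4).
example
    (p1 : FastTakeoff)
    (p2 : GoalMaximizer)
    (p3 : FastTakeoff → GoalMaximizer → ¬AdequateControls → DecisiveAdvantage)
    (p4 : ¬AdequateControls) :
    DecisiveAdvantage :=
  p3 p1 p2 p4

-- (8): without adequate preparations, civilization is ruined, from (5)–(7).
example
    (p5 : DecisiveAdvantage)
    (p6 : ¬StableAlignment → DecisiveAdvantage → CivilizationRuined)
    (p7 : ¬AdequatePreparations → ¬StableAlignment) :
    ¬AdequatePreparations → CivilizationRuined :=
  fun noPrep => p6 (p7 noPrep) p5

-- (10): civilization is ruined, from (8) and (9).
example
    (p8 : ¬AdequatePreparations → CivilizationRuined)
    (p9 : ¬AdequatePreparations) :
    CivilizationRuined :=
  p8 p9
```

Laid out this way, the load-bearing pieces are easier to see: (1)–(4) do the work for (5), (6) and (7) carry (8), and (9) closes the argument at (10).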

My questions are:

  • Have I made any errors in the argument structure?

  • Can anyone suggest an alternative argument structure?

  • Which of these premises seem the weakest to you?