What are the flaws in this AGI argument?

Building an aligned AGI is a large-scale engineering task

Humans have never completed a large-scale engineering task without at least one mistake

An AGI that has at least one mistake in its alignment model will be unaligned

Given enough time, an unaligned AGI will take an action that negatively impacts human survival

Humans wish to survive

Therefore, humans ought not to make an AGI until one of the above premises changes.


This is another concise argument about AI x-risk. It is not perfect. Which flaw in this argument do you consider the most important?
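
For reference, here is a minimal Lean 4 sketch of the argument's descriptive chain (premises 1–4), just to make the inference structure explicit. The predicate names (built, mistaken, unaligned, harmsSurvival) are my own labels, and premise 5 together with the normative "ought not" conclusion is deliberately left outside the formalization.

```lean
-- A minimal sketch of the argument's descriptive core.
-- All predicate names are my own labels, not from the original wording.

section Argument

variable (AGI : Type)
variable (built mistaken unaligned harmsSurvival : AGI → Prop)

-- Premises 1–2: any AGI humans actually build contains at least one
-- mistake in its alignment model.
variable (p12 : ∀ a, built a → mistaken a)
-- Premise 3: a mistaken alignment model makes the AGI unaligned.
variable (p3 : ∀ a, mistaken a → unaligned a)
-- Premise 4: given enough time, an unaligned AGI acts against human survival.
variable (p4 : ∀ a, unaligned a → harmsSurvival a)

-- Chaining the premises: building an AGI leads to harm to human survival.
-- Premise 5 ("humans wish to survive") and the conclusion ("humans ought
-- not to make an AGI") are not captured by this descriptive chain.
example (a : AGI) (h : built a) : harmsSurvival a :=
  p4 a (p3 a (p12 a h))

end Argument
```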