This rests on far too many assumptions, and the final claims are stated too strongly. One of the important assumptions is that the AI could trivially destroy humanity without even trying. Comparing humanity to a specific kind of flower is ridiculous. Even a strongly superhuman AI would be more like humans vs. rats than humans vs. a flower. Could humanity eliminate every rat in the world if we wanted to? Maybe, but we wouldn’t accomplish much of anything else while doing it.
Another assumption is that whatever is overseeing the AI is vastly stupider than the AI, even with all the tools built for that purpose. If you can make a superhuman AI, you can make a superhuman AI-plus-human system too (which is an easier task).
I feel I should mention an important implementation detail. If you do not know the range of possible values, it is often a good idea to check, since most generative processes only use a subset of it. The check is also linear time and keeps only two values in memory, though it does involve some branching. It is better to just know the range and supply it directly, but being able to compute it on the fly makes the algorithm much more practical.
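In case it helps, here is a minimal sketch of that range check, assuming the values arrive as an ordinary iterable; the name `value_range` is just illustrative, not anything from the original:

```python
def value_range(values):
    """Single linear pass to find the actual [min, max] of the data.

    Keeps only two values in memory (lo, hi); the per-element
    comparisons are the branching mentioned above.
    """
    it = iter(values)
    try:
        first = next(it)
    except StopIteration:
        raise ValueError("cannot infer a range from an empty sequence")
    lo = hi = first
    for v in it:
        if v < lo:
            lo = v
        elif v > hi:
            hi = v
    return lo, hi


# Example: discover the range before running the main algorithm,
# rather than hard-coding the full range of possible values.
lo, hi = value_range([7, 2, 9, 4, 4, 8])
print(lo, hi)  # 2 9
```

If you already know the range ahead of time, skipping this pass saves the extra scan, which is what the "better to just know and put that in" point amounts to.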