My point is not that there is a direct link between adversarial robustness and taking over the world, but that the lack of adversarial robustness is (inconclusive) evidence that deep learning is qualitatively worse than human intelligence in some way — a deficiency that would also manifest in ways other than adversarial examples. If that is true, it reduces the potential risk from such systems (maybe not to zero, but it substantially weakens the case for the more dramatic take-over scenarios).