My intuitive feeling is that this makes too many unscientific assumptions that it does not concretely support. My main issue is with the dismissal of the techniques for killing off humanity. While it is true that it is by definition impossible to reason as a superhuman entity would, it is still the case that a rogue AI would have only a limited set of tools with which to attempt a mass extinction. Wouldn’t the first step be to cite a counterexample strategy that an AI could actually use?
The authors think these are not existential only because “we have concluded that a single nuclear winter is unlikely to be an extinction risk”. Hardly comforting and, as argued above, existentially unconvincing.
I fail to see exactly where this is argued above. Loss of control does not make wiping humanity off the Earth with nukes any easier, and the objection also moves the goalposts: RAND has deliberately chosen a limited scope that focuses on extinction-level techniques, and I don’t think they are trying to offer any comfort about the horrors of nuclear war.
While I am critical of how applicable AI research is to robotics, I agree that RAND’s assumption that robots need much more development before they can be used to spread pathogens is probably wrong. Drone technology has likely already reached that level, and an advanced AI could control a global swarm of drones, either directly or through human actors, to spread disease in combination with some sort of supply-chain attack.
The problem here is caused by the original split of the “free software” term and community. A is entirely correct that X is open source, but since it does not grant the four freedoms (to run the program for any purpose, to change it, to redistribute copies, and to redistribute your modified copies), it is not free (as in freedom) software. Notice that none of these freedoms explicitly requires access to the source code, but in practice source access is a requirement for the freedom to make changes.
This is already one of the reasons that RMS himself opposes the term “open source”: