AI will not kill everyone without sequential reasoning.
This statement might be literally true, but only because of a loophole like “AI needs humans to help it kill everyone”. Like we’re probably not far away from, or may already have, the ability to create novel biological weapons, like engineered viruses, that could kill all humans before a response could be mustered. Yes, humans have to ask the LLM to help them create the thing, and then humans have to actually do the lab work and deployment, but from an outside view (which is especially important from a policy perspective), this looks a lot like “AI could kill everyone without sequential reasoning”.
I generally think that [autonomous actions due to misalignment] and [human misuse] are distinct categories with pretty different properties. The part you quoted addresses the former (as does most of the post). I agree that there are scenarios where the second is feasible and the first isn’t. I think you could sort of argue that this falls under AIs enhancing human intelligence.