This specific scenario involves us making a second obvious mistake (putting the AI in charge of the military), vs. scenarios where we just create the AI and it then acts of its own volition, like escaping and creating bioweapons that are dangerous without us having to do anything further at all.
OK. That seems to require AI hacking out of a box, which is unbelievable as per rule 4 or 8. Or do more mundane cases like AI doing economic transactions or research count?