[Note: I’m an AGI skeptic, at least compared to the median LW poster. I expect decades of improving tool-AI that never quite reaches human levels of out-of-domain judgement and exception handling. ]
There’s really not much advantage to pure automation over AI-assisted humans (and, as the systems get more advanced, human oversight of AI). And there are a lot of downsides, especially around adversarial manipulation of the environment and ECM/capture turning weapons against their own side. The very things that make self-contained AI weapons better than remote pilots (or at least remote weapons-unlock) are the things that make them terrifying to the users, not just the targets.
There’s an interesting parallel to landmines—they’re just too indiscriminate to be terribly effective on today’s battlefields. AI drones don’t have the problem of extending danger past the conflict (unless they do), but there’s a LONG way to go before they’re better than humans at deciding whether to take the shot.