I will posit that we can actually reduce civilian casualties through the use of [autonomous lethal robots], just as we do with precision-guided munitions, if… the technology is developed carefully. Only under those circumstances should the technology be released into the battlefield...
I am not a proponent for lethal autonomous robots… The question is… if we are in a war, how can we ensure these systems behave appropriately?
...I am not averse to a ban...
...Part of [what drew] me into this discussion is that I’ve been part of the development of [autonomous robots] for 25 years, so I have to bear some of the responsibility for the advent of this technology...
Atrocity is an inherently human behavior, one that does not have to be replicated in a robotic system...
I’ve argued… for a moratorium as opposed to an outright ban. We need to take the time to think about what we’re defining, what we’re banning… we need to be able to determine: “Can research ultimately reduce human casualties in the battlespace?” To me that’s a consequentialist argument. If we can save lives through the use of this technology… [then] there is a moral imperative for [it] to be used.
This is a research hypothesis, it is not a definitive statement. I am optimistic that this level of performance can be achieved, for two reasons: one is [that]… machines are getting stronger and smarter and more capable than humans are. If you look at human performance in the battlefield, that is a relatively low bar...
But until we can [design robotic platforms that can reduce human casualties], we have to be circumspect about how we move forward. And that requires a pause. We need to be able to investigate whether this is a feasible solution.
Google does this for the same reason, by the way. Google argues that human drivers are the most dangerous thing on the road, and if you want to save lives, we’ve got to get people out of cars and get robots driving them.
...Assuming wars will continue… what is the appropriate role of technology?… Nations all across the world… use [this technology] for force multiplication… to extend the fighter’s reach, [etc.], but there has been [almost no] work on the question of how we can reduce noncombatant casualties.
Recently, Human Rights Watch… has come out with a call for a ban… along with many other NGOs… Shortly thereafter… coincidentally, the US Department of Defense mandated restrictions on the development of these things in what I call a “quasi-moratorium”, where certain classes of these systems are not to be developed for at least 10 years, and in 5 years we’ll revisit whether 10 years is enough...
People are correctly saying “We need to examine this, we cannot blindly go forward”...
...My underlying research thesis, [that] robots can ultimately be more humane than human beings in military situations, is not a short-term research agenda, and it requires a substantial research effort… by a whole community… They will never be perfectly ethical. They will make mistakes. But if they make fewer mistakes than human warfighters do, that translates into saving noncombatant lives.
— Ron Arkin, author of Governing Lethal Behavior in Autonomous Robots, on autonomous lethal robots