Geoff Anders just showed me this PowerPoint prepared by the U.S. Air Force’s Center for Strategy and Technology, the same group that produced this bombastic ‘future of the air force’ video.
Slide 18 makes a point I often make when introducing people to the topic: the military’s official policy assumption is that humans will always be in the loop, but in reality there will be constant pressure to pull humans out of the loop (see e.g. Arkin’s military-funded Governing Lethal Behavior in Autonomous Robots). The slide concludes: “In fact, exponential technological change is outpacing the ethical programming of unmanned technology.” That is not far from the way I put it in Facing the Singularity: “AI safety research is in a race against AI capabilities research. Right now, AI capabilities research is winning, and in fact is pulling ahead. Humanity is pushing harder on AI capabilities research than on AI safety research.”