The development of autonomous machines such as robots has brought great benefits to manufacturing. For jobs that require high levels of accuracy and repeatability or are simply too dangerous or boring for people to carry out, robots in, say, a car plant offer a welcome solution.
But increasing reliance on automated, or semi-autonomous, technology in other areas is giving some engineers cause for concern. One such area is defence, where newspaper headlines frequently highlight strikes made by unmanned aerial vehicles (UAVs), or drones, against targets on the Afghan-Pakistani border. These technological marvels, seemingly unerringly precise, have allowed US and British forces to engage in asymmetric warfare – that is, countering a terrorist-type threat – in previously unheard-of ways. But some experts are questioning the morality of using robot-like systems to make these strikes – and they are worried about the path the technology may lead us down.
Among these voices is British academic Noel Sharkey, an expert on robotics from the University of Sheffield’s department of computer science, who last year co-founded a pressure group, the International Committee for Robot Arms Control (ICRAC). The organisation recently brought together experts from all over the world, along with representatives of government, the defence industry and humanitarian groups, for a conference in Berlin to discuss the ethics and proliferation of automated defence systems.
Professor Sharkey, as his position would suggest, is no Luddite. “The actual technology of, say, UAVs is nothing short of miraculous,” he says. “I admire it as an engineer, and it’s doing robotics a lot of good.”
But he is worried that the ethical dimensions of using systems such as unmanned drones to attack insurgents in Afghanistan are not being given due consideration. Such attacks may, he believes, contravene the rules of war and the principles of internationally accepted humanitarian law such as the Geneva Convention. Furthermore, this form of warfare is already pitching countries into a new arms race, with many nations either buying or developing UAVs for military purposes.
Sharkey explains: “The cornerstone of the Geneva Convention is the principle of distinction, which means that a weapon has to be able to discriminate between a friend and a foe. Another key principle is proportionality – any collateral damage must be proportional to the military advantage gained.”
He believes that UAV and drone attacks are causing unacceptable levels of civilian casualties, even though the final judgment on whether to attack is made by human commanders at, say, an air force base thousands of miles away in the Nevada desert. He argues that increasing the autonomy of such systems is a stated goal of the military, and that the development of completely autonomous military robots (such as BAE’s Taranis system) on the ground, at sea or in the air can only compound the ethical dilemma.
“Proportionality is notoriously difficult to judge – there’s no metric for it and it’s up to a human commander to make that call,” he says. “My worry is that no robot or software can do that: it can’t distinguish between combatant and civilian.” Mistakes, he notes, are already being made even with humans making the final call.
Professor Alan Winfield of the University of the West of England is another expert on robotics and is carrying out pioneering research into developing machines that can work comfortably alongside humans [see box]. But he is well aware of the limitations of robotics. “The idea that you can develop a machine that can distinguish between friend and foe is fantasy,” he argues. “In the fog of war, even human soldiers have enormous difficulty sometimes making that judgment. And our technology is not good enough to be able to tell the difference between a child with a toy and a youth with a weapon.”