By Erik Schechter, Live Science
Ask some technologists, and they’ll say that lethal autonomous weapons—machines that can select and destroy targets without human intervention—are the next step in modern warfare, a natural evolution beyond today’s remotely operated drones and unmanned ground vehicles. Others will decry such systems as an abomination and a threat to international humanitarian law (IHL) or the law of armed conflict.
The U.N. Human Rights Council has, for now, called for a moratorium on the development of killer robots. But activist groups like the International Committee for Robot Arms Control (ICRAC) want to see this class of weapon completely banned. The question is whether it is too early—or too late—for a blanket prohibition. Indeed, depending on how one defines “autonomy,” such systems are already in use.
From stones to arrows to ballistic missiles, human beings have always tried to curtail their direct involvement in combat, said Ronald Arkin, a computer scientist at the Georgia Institute of Technology. Military robots are just more of the same. With autonomous systems, people no longer do the targeting, but they still program, activate, and deploy these weapons.
“There will always be a human in the kill chain with these lethal autonomous systems, unless you’re making the case that they can go off and declare war like the Cylons,” said Arkin, referring to the warring cyborgs from Battlestar Galactica. He added, “I enjoy science fiction as much as the next person, but I don’t think that’s what this debate should be about at this point in time.”

Peter Asaro, however, is not impressed with this domino theory of agency. A philosopher of science at The New School in New York and a co-founder of ICRAC, Asaro contends that the use of deadly force by robots lacks “meaningful human control.” As such, killer robots would be taking on the role of moral actors, a role he doubts they are capable of fulfilling under international humanitarian law. That is why, he argues, these systems must be banned.