While watching ‘The Terminator’ or ‘The Matrix’, one thing comes to mind: this can’t be real. But even though they won’t look like the Terminator, there will indeed be autonomous machines in warfare, deciding for themselves who is a target and who is not. With technology advancing at tremendous speed, fictional worlds are becoming less and less surreal.
By Maarten van Bijnen – intern at IKV Pax Christi
At this very moment, American soldiers are striking targets in Pakistan from military bases in the U.S., just by pushing buttons and handling joysticks. The so-called drones they use – unmanned, remotely controlled aerial vehicles – have been a major success for the U.S. Army, since they offer a way of tracking down and eliminating al-Qaeda members without risking American lives. This new kind of warfare doesn’t stop there. Countries like the U.S., China, Germany, Israel, Russia, the United Kingdom, India and South Korea are showing interest in the development of fully autonomous killer robots. The U.S. Department of Defense Directive of November 2012 shows that the U.S. is actually working on the development of killer robots. With a robotic arms race rapidly emerging, leading robotics and artificial intelligence expert Professor Noel Sharkey says that autonomous robots could be on battlefields within a decade.
Like the (in)famous drones, a wide variety of semi-autonomous robots is in use at the moment. For instance, the U.S. Navy uses a defence system called the ‘Phalanx’, which is capable of tracking down and destroying incoming missiles. The only human control is the ‘order’ given to eliminate the target. When a person gives the command, a machine gun fires 4,500 rounds per minute at the object. So there is only ‘supervised automation’. Another ‘almost autonomous’ killer robot is the ‘Samsung Techwin surveillance and security guard robot’. These ‘guards’ are placed in the demilitarized zone between North and South Korea. They automatically track targets by using sensors. Although these robots need human verification before firing at a target, they do have an ‘automatic mode’, which would make them fully autonomous. The research wing of the Pentagon is also developing a new kind of drone, the X-47B. This unmanned plane is designed to be so manoeuvrable that its twists and turns create G-forces too great for a human pilot to withstand, making it physically impossible for humans to fly it.
Ever since mankind developed the first weapons, there has been a clear-cut drive towards distancing oneself from the enemy. From bow and arrow to catapults, from tanks to drones: humans are constantly thinking of creative ways to kill their adversaries without having to see their faces. Of course, it’s a way of staying alive, but it also makes the moral decision to kill a lot easier. Nowadays a soldier only has to push a button and, ‘et voilà’, the job is done. The only thing giving that soldier a sense of moral involvement is the word ‘attack’, or the pull of a trigger to drop a bomb 4,000 miles away. Removing that trigger, or human supervision altogether, as will be the case with killer robots, is the next step towards eliminating morality. And it is exactly this point that scares experts.
Sharkey says that fully autonomous weapons will give way to an unregulated environment, where moral principles and international law will be in serious danger. The problem, according to Sharkey, is that there is “no mechanism in a robot’s ‘mind’ to distinguish between a child holding up a sweet and an adult pointing a gun”. The failure of such mechanisms creates serious problems, because innocent civilians could become targets. Drone warfare has shown that distancing humans from the battlefield can be fatal to civilians. But those are mistakes made by human operators. If a human being with real knowledge of context isn’t capable of distinguishing between combatants and civilians, how can a piece of metal really be any good?
This is exactly why non-governmental organizations like Human Rights Watch and IKV Pax Christi have formed an international coalition to strive for a pre-emptive ban on fully autonomous weapon systems. Their main point is that autonomous weapons remove human judgment from the battlefield, which makes civilians highly vulnerable to attacks. The coalition, the Campaign to Stop Killer Robots, also highlights the accountability gap the use of robots would create. The question would be: who is responsible for violations of international law when a robot attacks a target autonomously? Could a nation say that it wasn’t their intention, but the machine’s own decision? Would the robot then go to robot jail, or would an engineer be held responsible for the robot’s law-breaking?
In May 2013, UN Special Rapporteur Christof Heyns presented his report on ‘lethal autonomous robots’ (LARs) during a meeting of the UN Human Rights Council in Geneva. In his report Heyns calls for ‘a worldwide moratorium on the testing, production and use of killer robots’ until the international community can lay down rules for their use. According to Heyns, a killer robot wouldn’t act out of vengeance or anger, which is an asset of its use. The machine is emotionless, so it won’t intentionally torture or rape people – unless, of course, the engineer intends it to. The main problem Heyns mentions is that humans would be more disconnected from the actions taking place during warfare. It would lead to a ‘robotization’ of mankind, in which empathy eventually fades away.
The U.S., Russia, China and Israel are highly interested in these new technologies, not least because the killer robot industry could become a multibillion-dollar business. And as we know: money is a huge motivator. Such tendencies make an international ban or regulation on the production and use of these robots hard to accomplish, but that doesn’t mean protesters will lay down their arms. Many are optimistic that their campaign will eventually pay off. Nobel Peace Prize Laureate Jody Williams, who became famous for her campaigning role in banning landmines, says that striving for a ban before killer robots are developed and deployed can be highly effective. She points to the international ban on anti-personnel landmines, which came into being by pushing politicians before the weapons were widely in use.
‘Humans need to get ahead of this technology before it gets ahead of them’ is the main message of experts and human rights organizations. Not only would these machines reduce morality and humanity on the battlefield and make civilians more vulnerable, they would also create more power for leaders with bad intentions. Tom Malinowski, director of Human Rights Watch’s Washington office, says that human beings have inherent limits, where morality, empathy and fear can eventually prevail. So when a dictator like Bashar al-Assad relies on human soldiers, he knows there are boundaries. Even his troops ‘have a breaking point’. But killer robots don’t. If someone like Assad had a killer robot army, no protester would be safe. It would give him unlimited power. He could do whatever he pleased, without having to worry about angry mobs – let alone a system like ‘democracy’. Because machines do not get tired, they could search and destroy relentlessly. Taking these facts into account, let us heed the wisdom of Aristotle: ‘What it lies in our power to do, it lies in our power not to do.’