By Brianna Starosciak
A new wave of “autonomous” weapons technology is on the horizon, and many countries are discussing ways in which that new technology may be used in future military conflicts—and whether new rules to manage the risks posed by those weapons are in order.
While lethal autonomous weapons systems are not deployed on the battlefield yet, their semi-autonomous precursors, such as drones, have been around for years. It is only a matter of time before their autonomous descendants join them on the battlefield.
There is no definition of lethal autonomous weapons systems under the Convention on Certain Conventional Weapons (CCW), which has been a key mechanism used to address new weapon types. A May 13-16 informal meeting of CCW states parties determined that since lethal autonomous weapons systems are in their nascent stage and could evolve in many different ways, it is too early to establish a definition for these weapons.
However, there are some key characteristics of lethal autonomous weapons systems that differentiate them from other types of weapons, the most important of which is that these robotic systems would be able to select a target and attack without any human intervention. In contrast, the unmanned aerial vehicles (a.k.a. drones) that are widely used today still require human intelligence to identify and engage targets.
Proponents of lethal autonomous weapons systems argue that they offer a number of useful capabilities: they can operate continuously for long periods of time in hostile environments, they are impervious to potential chemical and biological attack, they can communicate swiftly, and they could record any potentially unethical or nefarious behavior in their zone of operation. Most importantly, proponents argue, the deployment of lethal autonomous weapons systems could keep more soldiers out of the line of fire.
On the other hand, a growing number of critics, such as Human Rights Watch and the Women’s International League for Peace and Freedom, highlight significant issues of concern surrounding the development and potential use of lethal autonomous weapons systems. Some of these issues include their vulnerability to hacking, the potential for lowering the threshold of conflict, difficulty identifying friend and foe, the need to develop a code of ethics to guide their use in warfare, and the need to enforce rules governing state use of such weapons systems. Much farther into the future, critics note, there could be issues of robot self-awareness. At that point, lethal autonomous weapons systems could draw conclusions about their own status in society and the moral judgments of fellow soldiers or other human beings.
Furthermore, there is a moral dilemma about whether fully autonomous weapons systems should be allowed to decide the fate of human beings, even if those systems are built and programmed by human beings. This would represent a fundamental shift from warfare in the past, in which human beings have been held accountable for their decisions and actions in war zones. The international community needs to wrestle with this moral dilemma before these systems are deployed.
A number of international security and humanitarian law experts also argue that lethal autonomous weapons systems have the potential to lower the threshold of conflict. If lethal autonomous weapons systems become a major component of war-fighting in the future, they say, political and military leaders may be more likely to engage in military action because their own soldiers will not be at risk.
A second point is that even if lethal autonomous weapons systems can be programmed in a way that attempts to respect the rights of noncombatants and the rules of war, there is no guarantee that every country will design, deploy, and operate them in that manner. A state that has committed human rights abuses in the past will probably not have scruples about continuing to do so with lethal autonomous weapons systems.
Two countries of particular concern are Russia and China. Both states have a history of human rights abuses and are currently developing lethal autonomous weapons systems. The Russian and Chinese programs highlight the importance of the U.S. taking a leading role in pursuing rules of the road for lethal autonomous weapons systems and/or a legally binding regime banning the development of certain kinds of lethal autonomous weapons systems in order to head off a new and destabilizing arms race.
Before a wave of lethal autonomous weapons systems is deployed, countries need to take the time now to decide whether they want to give a machine full control over targeting and killing human beings. This moral issue, coupled with concerns about lowering the threshold of conflict and human rights abuses, should give the international community more than enough reason to pause.
Steps such as the CCW’s informal May 13-16 meeting are a good start, but lethal autonomous weapons systems still need to be defined, and the international community needs to explore effective ways to regulate them. If we wait to have serious discussions on lethal autonomous weapons systems until they are being manufactured or deployed, it will already be too late.