CCW Report, Vol. 6, No. 8
Human control for human rights
29 August 2018
Ray Acheson | Reaching Critical Will of WILPF
Tuesday’s discussions at UN talks on autonomous weapons moved to the room of the Human Rights Council. This is significant, given that the UN began consideration of this issue in the Council in 2013 on the basis of the first report on the issue by Christof Heyns, the Special Rapporteur on extrajudicial, summary or arbitrary executions. The beige walls (and floors, and desks, and chairs) of the usual “disarmament” meeting room disappeared two floors beneath us, to be replaced by the famously vibrant ceiling created by Spanish abstract artist Miquel Barceló. “All of it is a sea upside down, but it is also a cave,” Barceló said about his ceiling. “The complete union of opposites, the ocean surface of the Earth and its most concealed cavities.” How fitting for discussions about killer robots, which are being developed quietly by a handful of countries while the world’s governments come together in a room dedicated to “Human Rights and the Alliance of Civilizations” to wrestle with the philosophical, ethical, technical, and legal questions about the increasing mechanisation of violence and the further removal of human beings from accountability for the use of force.
Appropriately, Tuesday’s discussions focused primarily on the concept of human control—which many governments and activists tie directly to the protection of human rights and humanitarian law. Over the course of the last five years, the belief that meaningful human control must be maintained over the critical functions of weapon systems has emerged more or less as a point of consensus amongst participating governments. The question for most states is not if they “have a duty to control or supervise the development and/or employment of autonomous weapon systems, but how that control or supervision ought to be usefully defined and extended,” as the Swiss delegation said.
Differences of opinion remain regarding what constitutes “meaningful” control and what limits on autonomy are required to ensure this control. On Tuesday delegations continued their examination of the stages of a weapon’s life cycle at which human control or intervention is necessary. In April the Chair released a pie-chart (now apparently affectionately referred to as the “sunrise” diagram) indicating the potential phases in which human control could be relevant in the life of a weapon system. These phases include research and development; testing, evaluation, verification, validation, and review; deployment, command, and control; and use and abort. Some delegations have since suggested additional phases; the UK working paper for this session, for example, adds “national policies” and “battle damage assessment / lessons learned” to the beginning and end of the sunrise.
Selection and engagement of targets seems to be the most common definition of the critical functions of a weapon system that require human control. Some, like Japan, argued that autonomy in selecting targets would be acceptable, but that human control is necessary to initiate an attack. Others believe humans must control both functions in order to ensure the protection of human dignity and compliance with international law.
Some states have also expressed concern with processes in the development stages of these weapons. Ireland expressed concern about bias in the programming of a weapon system, highlighting the potential for the perpetuation and amplification of social bias, including gender bias, at the programming stage. The International Committee of the Red Cross (ICRC) has argued that humans must maintain control over programming, development, activation, and operational phases of a weapon system, because international humanitarian law “requires that those who plan, decide upon and carry out attacks make certain judgments in applying the norms when launching an attack.”
Overall, there seems to be convergence around the idea that fully autonomous weapons would not be acceptable, because they would not be able to comply with international law or ethical frameworks. There also seems to be a majority view that a fully autonomous weapon is one that can select and engage targets autonomously, without human intervention (as distinct from, for example, armed drones, which are controlled remotely by humans). It is precisely these critical functions that most states, together with the ICRC and the groups associated with the Campaign to Stop Killer Robots, believe must not be left to programming and algorithms.
Only a small minority of states seemed skeptical that maintaining sufficient human control would necessitate limiting a weapon system’s autonomous functions. Not surprisingly, those arguing against restrictions on autonomous weapons are the same governments that are engaged in their research and development. The US delegation, for example, argued that states and civil society must not “stigmatise new technologies” or set new international standards, but instead work to ensure “responsible use of weapons”.
The rest of the participants at this meeting, in contrast, seem keen to move ahead with delineating limits on autonomy and corresponding rules and mechanisms for meaningful human control now, as a matter of growing urgency. As the Austrian delegation said, the best way to settle the aforementioned differences in opinion on human control and critical functions is to engage in negotiations of new international law.
The majority of governments, including 26 countries that have called explicitly for a prohibition on fully autonomous weapons, have previously supported this call. The tech, academic, and scientific communities engaged in work on autonomous technologies and artificial intelligence (AI) also broadly support this call. A side event hosted by the Campaign to Stop Killer Robots on Tuesday featured three individuals who have helped organise these communities to give voice to their opposition to the development of autonomous weapons. “We keep hearing questions about what the private sector is saying,” remarked Peter Asaro of the International Committee on Robot Arms Control (ICRAC), who helped coordinate a letter by 1200 academics in support of Google employees demanding their company cancel its Project Maven contract with the Pentagon. “The private sector is asking states to take action and ban killer robots.” Likewise, Amr Gaber of the Tech Workers Coalition pointed out that tech workers are actually taking the lead in this work, but they cannot do it alone. As a self-described “low-level software engineer” with Google, he helped coordinate the letter signed by 4000 Google employees who demanded the company cancel Project Maven and institute a policy against taking on future military work. Banning autonomous weapons is the first step to curbing the potential harm of technology, he argued, but states and civil society need to do more to make sure technology remains in the service of human rights, not repression and violence. “We’re tired of hearing that this is too hard to solve,” he said. “We are standing up for justice.”
At the end of the day, the ask is simple. Weapons must be under human control. The alternative, as ICRC President Peter Maurer has written, is a future where humans are “so far removed from wartime choices that life-and-death decision making is effectively left to sensors and software,” and “where wars are fought with algorithms, with uncertain outcomes for civilians and combatants alike.” This is why the Campaign to Stop Killer Robots and a growing number of governments are calling for the negotiation of a legally binding treaty to prohibit fully autonomous weapons. It is also why we believe, as Amr Gaber noted, that it will be important to hear not just from governments, or even from tech workers and scientists, but also from those who will be impacted by these technologies in the future. If these weapons are developed, human beings around the world will suffer. These weapons will be used to repress, to harm, to kill. Human rights will be undermined. It is the protection of human rights and dignity that has motivated our previous work for humanitarian disarmament. It is this that motivates our work for stronger laws and norms to prevent the increasing abstraction and mechanisation of violence. A different future is possible and we have the capacity to build it.