
CCW Report, Vol. 10, No. 4

Autonomous weapons and questions of ethics, control, and accountability
3 June 2022


Ray Acheson and Allison Pytlak | Women's International League for Peace and Freedom

On 1 and 3 June, the Chair of the 2022 Group of Governmental Experts (GGE) on autonomous weapon systems (AWS) convened the second of three informal, virtual intersessional discussions. This round of informal discussions focused on the topics of human control, human judgement, and human-machine interaction (HMI); ethical considerations; and responsibility and accountability. While there continues to be emerging consensus around many aspects of these topics, states still diverge over whether applicable rules and standards should be legally binding or voluntary. They also diverge over whether ethics should simply inform a legal analysis, or need to be taken into account in their own right given the extreme risks posed by these weapons to human dignity and rights.

Control vs. judgement vs. interaction

Much of the discussion about human control, judgement, and HMI centred around how these terms are defined and understood, either in the abstract or in relation to the proposals under consideration by the GGE. Participants debated whether one term is more relevant than another, and the extent to which states need to come to a shared understanding of their meaning before taking political action.

France, for example, argued that HMI may take various forms and be implemented at various stages of a weapon’s lifecycle. In its view, human control is exercised by various institutions and through ensuring that humans are in a position to understand how a system works and interacts with its environment. China, meanwhile, proposed examining HMI from three dimensions: physical characteristics, environmental impacts, and self-learning or evolutionary capacity—all of which are interlinked, in its view. China said it found concepts such as “meaningful human control” (MHC), introduced in previous discussions, to be inspiring and constructive.

Chile and Mexico jointly outlined their views on human control and HMI, stressing that they do not feel it useful to discuss abstract concepts and as such have tried to identify specific elements of what they understand as being human control. Amongst other factors, the unpredictability of autonomous weapons is of concern and necessitates human control.

Australia noted that one phrase cannot cover the complexity of human control that is required across the life- and targeting cycle to ensure international humanitarian law (IHL) compliance. It outlined that the requisite level of human interaction, or human control, will vary depending on several factors. Portugal similarly noted that a range of actors are responsible for implementing policies and doctrines relating to human control and decision-making.

Stop Killer Robots shared that in its view, MHC “will be context-based, dynamic, multidimensional and situation dependent.” It outlined several relevant factors that help to determine whether a human operator can exert MHC in any given situation.

Through multiple interventions, the United States (US) referred to the joint proposal it has tabled with Australia, Canada, Japan, the Republic of Korea (ROK), and the United Kingdom (UK). It framed the paper as an attempt to move beyond abstract debates about terminology and focus on measures that could gain consensus. It highlighted in particular paragraphs 11(d), 20, and 29 as relevant to the discussion topics.

Several interventions responded to elements within the US-led joint proposal. France appreciated the suggested good practices contained in paragraph 20. It also said it shares the idea of making HMI the overarching theme of any outcome adopted in the GGE. India said the US-led proposal has useful elements for bringing forward work in the GGE.

Switzerland questioned why paragraph 11 focuses on the phases of autonomous weapon design rather than deployment and use, noting that how a weapon is used in the “fog of war” often results in it becoming indiscriminate, despite the design intention. The US acknowledged that any weapon can be misused, but said that in this paragraph it has focused on weapons that, by nature, would be unlawful under IHL. New Zealand queried whether, on this basis, it would be possible to state that “weapons systems based on emerging technology in the area of LAWS are prohibited if of a nature to cause…” various effects. It suggested that clarifying this would help to establish a general prohibition rather than a prohibition on use.

Uruguay referred to the roadmap proposal it has submitted with a group of 13 countries calling for the prohibition of weapon systems that operate outside a responsible chain of human command and control, and again called on countries to support this proposal. Argentina, also a co-sponsor of the roadmap proposal, reiterated its belief in the need for a new binding instrument. It said there is a high level of commonality on the particular issue of human control, and that the difference between state positions lies in what form the rules should take: a legally binding instrument, voluntary guidance, or a political declaration. It recalled the responsibility of states to continue the progressive development of the CCW, noting that existing IHL does not have specific provisions to regulate or eventually prohibit weapon systems that have a certain level of autonomy. Costa Rica made similar points.

Palestine offered support for Argentina’s statement and provided an analogy about car design to illustrate why binding law is needed. Palestine stressed that we must acknowledge that AWS are already being developed, with “carte blanche” because of the lack of rules restraining such development.

Switzerland stressed that it sees a lot of commonalities across the positions of states on this topic, even if slightly different terminologies are being used. It said that human control has “crystallised as an excellent tool” to ensure conformity with IHL, for responsibility, and for taking into account ethical considerations.

Ethical considerations

Broadly speaking, states that do not support the development of a legally binding instrument (LBI) on AWS are also wary of discussions about ethical considerations of AWS. States such as France, India, Japan, ROK, UK, and US argued that ethics are enshrined within IHL and should not be discussed separately. ROK suggested that most ethical considerations about AWS can be addressed by strengthening compliance with IHL and sharing best practices, as indicated in the US-led proposal.

France said ethics and law are two distinct systems and ethics can’t be “imposed” on the legal order. The US said that ethics can be a “useful supplement” to the law, and, while it acknowledged that new law may be necessary to address uncertainty and unpredictability associated with emerging technologies, it has not yet been persuaded this is the case. The UK similarly argued that ethics can provide standards to help drive best practices, but the current legal regime is sufficient as is to address AWS.

Ireland said it appreciated the UK’s suggestion to consider applied ethics, but said it is also important not to reduce the role of ethics to informing legal analysis. Switzerland likewise agreed that IHL incorporates ethics but argued that a normative and operational framework on AWS needs to include ethical considerations not yet covered by existing law, such as the ethical implications of selecting and engaging targets without sufficient human control. The International Committee of the Red Cross (ICRC) pointed out that IHL is not a static body of law; it responds to contemporary developments in conflicts, weapons, and the way weapons are used. The most useful ethical considerations are those that put ethics into practice in the service of specific limits on AWS, especially those that are designed to target human beings directly. The Philippines noted that the ethical bottom line—that machines shouldn’t displace humans in making decisions over use of force—isn’t a legal principle but something more fundamental.

Indeed, those who do support an LBI see ethics as being critical to discussions and action on AWS. Austria, Ireland, Philippines, Switzerland, ICRC, and many others see ethics as central to the driving motivation to address these weapon systems at all. As Ireland noted, the core questions of the GGE are ethical ones about how humans interact with new and emerging technology. It is not legally feasible or ethically desirable to transfer decisions or accountability to machines, noted Ireland, and these developments raise the concern that AWS could erode ethics.

Stop Killer Robots argued that the prohibitions and positive obligations it has proposed would respond to the ethical challenges raised by AWS. For example, the proposed prohibition on machines targeting people would protect human dignity and avoid algorithmic bias. Uruguay similarly argued that ethical concerns can be addressed by prohibiting the development and use of weapons that conduct attacks outside a chain of human control, that cannot comply with IHL or the dictates of human conscience, or that are inherently indiscriminate.

Japan agreed that the work on AWS was initiated by concerns about machines killing humans and is grounded in the ethical proposition that life or death decisions shouldn’t be left to machines. However, Japan argued that the GGE has taken this into account already by agreeing that accountability can’t be transferred to machines, and that ethics is a concept that can be interpreted differently based on values and cultures. China agreed that all countries may have their own ethical considerations and should take measures to articulate these within codes of conduct and best practice guidelines.

Meanwhile, ROK dismissed concerns that AWS shouldn’t be developed until ethical concerns such as impacts on human dignity can be addressed. Israel asserted that “inaccurate” discussions about ethics could possibly undermine IHL and the protection of civilians. Italy said it is not convinced that moral and ethical considerations strengthen the human element, warning that they might risk introducing “immeasurable” components. It also argued that considering an LBI at this moment could risk frustrating aspirations to universality.

The US reiterated its argument that weapons that select and engage targets already exist and that “no one” argues they are unethical or incompatible with human dignity. Of course, concerns have been raised about certain existing weapon systems heading in this direction. And, as the ICRC clarified, there is a distinction between systems that detect and target missiles and those that select and target human beings.

Responsibility and accountability

Almost every participating delegation reiterated that responsibility and accountability for the use and impacts of AWS lies with humans.

For many countries, this is closely related to the concept of human control. As Germany articulated, humans are accountable for the actions of AWS and thus must retain sufficient control over them. Italy expressed concern in particular with weapon systems that can select and engage targets without sufficient human control, a concern shared by the ICRC and Stop Killer Robots, among others.

Yet, ROK expressed concern with the “excessive extension” of international law, arguing that accountability is applied differently in each country and that there is no global standard for the application of international criminal law.

Meanwhile, the US argued that it is not technology or autonomy that is problematic, but the possible uses. It asserted that new technologies can enhance accountability; for example, an AWS could have system logs that record operations, which can facilitate investigations of the system’s performance and use. The US noted that specific measures in its proposal are aimed at strengthening responsibility and accountability, such as the positive and negative conditions set out in 11(c) and 11(d), which are meant to avoid creating weapons that would deliberately undermine command responsibility. Building on comments made by New Zealand and Switzerland during the discussions on human control, Ireland asked why these provisions can’t be used as a basis for general prohibitions in an LBI.

Israel said human responsibility and accountability apply to the decision to use a weapon system, as well as development and deployment. China said responsibility and accountability must also apply to people who research, develop, manufacture, deploy, and use AWS. Italy said responsibility and accountability are necessary at all levels of command, including the strategic level.

Switzerland agreed that human responsibility for the use of AWS can be exercised in various ways across the weapon’s lifecycle and through HMI. Decisions over the use of force and the potential use of AWS remain key moments for responsibility, and human control must ensure there is a human user who is legally and morally responsible for the effects of an attack. Autonomous functions must not be designed to conduct attacks, Switzerland argued.

The US highlighted comments from New Zealand about the risk of accidents that result in civilian casualties, rather than systems that are designed to commit violations. The US argued that this is “sometimes tragic and unavoidable” but does not necessarily imply a violation of IHL. Assessing due diligence can be difficult with new technologies, as the customary standards of care might not be clear, which is why states must establish doctrines and procedures and conduct training. In response, New Zealand said there is some dissonance in the US paper between its focus on the use of weapon systems and its focus on their design.

The ICRC suggested that a weapon that by design or use precludes responsibility and accountability is difficult to reconcile with principles of justice and the rule of law. It recommended prohibiting AWS designed in such a way that their effects can’t be understood, predicted, or explained, including weapons where the parameters of targeting change during use, or where targeting functions are controlled by machine learning technology.

Similarly, Stop Killer Robots argued that a weapon system that autonomously selects and engages a target based on the processing of sensor information prevents the human operator from determining the specific target, location, timing, duration, and extent of the force applied. “This inevitably leads to challenges in attributing responsibility and accountability to the human operator for the effects of the attack. In the context of armed conflict, where the fog of war already makes it difficult to hold individuals to account,” Stop Killer Robots argued, AWS “would further undermine accountability for perpetrators of unlawful violence, and would make providing retributive justice to victims even more difficult.”

Ways forward

Throughout both days of discussion, states considered what is possible to achieve in the time allocated to the GGE, and where it should focus its efforts.

Stop Killer Robots said that we have now reached the point where the quality and extent of human control that is required can only be determined through a process of negotiation, referring to other political processes in which clarification of terms was achieved once states moved beyond conceptual discussions and began negotiating the end product.

India disagreed, noting that while this has occurred in other processes, it does not share the understanding that arriving at definitions becomes easier through negotiation.

The US, UK, and Switzerland variously suggested focusing now on substance and leaving aside questions about form. Mexico countered that substance and form cannot be strictly separated.

Switzerland suggested that one way forward is an LBI, ideally a CCW protocol, with a prohibition of AWS whose effects in attack cannot be controlled and that lack sufficient control throughout their lifecycle to attribute human responsibility. Such a norm would go a long way towards addressing questions of accountability and responsibility, Switzerland argued, also noting that a complementary way forward is committing to a set of practical measures in an annex. The Philippines and Uruguay agreed there is compatibility between an LBI and good practices. Austria also favoured an LBI with a two-track approach.

Conclusion

Overall, support continues to be strong for an LBI that ensures meaningful human control over weapons and the use of force, responsibility and accountability for the effects of weapons, and strong ethical standards. Yet the countries behind the US-led proposal, and a few others, remain resistant to the development of new legal commitments that would constrain all countries equally. Instead, they continue to position voluntary measures and shared practices as the “only” practical way forward.

As with anything else, however, what is practical or feasible is not written in stone; it is determined only by political will. It is entirely practical and feasible to prohibit and restrict the development of autonomy in weapon systems in order to protect civilians, preserve human dignity, and promote the rule of law. Yet time and again, the most heavily militarised states assert their right to violence above the very principles and legal rules they purport to implement.

The GGE will hold one final round of informal virtual consultations ahead of its next formal session in July. On 27–28 June, the Chair will convene talks on risk identification and assessment; mitigation measures; and good practices relating to human-machine interaction. This will be another opportunity for states to advance the development of meaningful agreements to protect humanity.
