CCW Report, Vol. 10, No. 5
Building a foundation to protect humanity
30 June 2022
Laura Varella and Allison Pytlak | Women's International League for Peace and Freedom
On 27 and 29 June, the Chair of the Group of Governmental Experts (GGE) on autonomous weapon systems (AWS) convened the third and final informal, virtual intersessional discussion. This round of informal discussions focused on the topics of risk identification and assessment, mitigation measures, and good practices relating to human-machine interaction (HMI). The first two topics were mainly covered during the meeting held on 27 June and the last on 29 June, although some delegations provided reflections or responses on all topics across both meetings. On both days delegations also offered views on the way forward for the GGE, in preparation for the second formal session to be held 25–29 July 2022 in Geneva.
Risk identification and assessment
France highlighted paragraph 25 of the joint proposal submitted to the GGE by the United States on behalf of five other countries (“Principles and Good Practices on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems”) which establishes that “risk assessments and mitigation measures should be part of the design, development, testing, and deployment cycle of weapons systems based on emerging technologies in the area of LAWS”. It further noted that paragraph 12 requires “inter alia, that human beings make certain judgements in good faith based on their assessment of the information available to them at the time.”
The United States (US) stressed that risk assessment is an important subject linked to the development of emerging technologies. It noted that autonomous features have been used in weapons for many years and that, if used properly, they can increase precision and reduce the risk of civilian casualties in operations. The US also pointed out that several past conclusions adopted by consensus in the 2018 and 2019 GGE reports indicated different types of risks. It argued that it is useful to compile the outcomes of these reports, because doing so allows states to find them in a single document and work to implement them. The US noted that its joint proposal includes the concept of “unintended engagement”, a concept elaborated in the 2019 GGE conclusions that can address risks of attacks on civilians or unintended military objectives, which is a concern for many actors in the international community.
Austria recognised the importance of some of these key terms, including “unintended engagement”, but refuted the argument made earlier regarding the increased precision and accuracy provided by AWS and a possible benefit to international humanitarian law (IHL). Austria argued that using AWS would not lead to more precision but instead to more generalised decision-making in targeting; it would not mean more accuracy but rather a one-size-fits-all approach, which is not ideal. Austria also asserted that it is crucial to understand exactly what the risks associated with AWS are, rather than merely speculating about them, because otherwise important elements risk being missed. It argued that risks are important and that states should enumerate them and include them in a report or document to be approved this year.
Austria highlighted that there are risks that states have not touched upon in the past, such as when autonomous weapons operate without any control at all, including when they have self-learning systems that no longer follow their intended programming. It argued that no military wants to use weapons it cannot control, and that doing so would not only violate IHL but would also undermine military utility.
Ukraine suggested that the risks could be categorised into levels, such as low, very low, high, or critical. It asserted that every technical system has probable risks, and that when defined by level, these risks can be better assessed and subsequently mitigated.
Austria highlighted that a lot of work has been done on this topic, as the 2019 GGE report shows. However, the list of risks is not exhaustive, and there are others that have been raised by civil society and the International Committee of the Red Cross (ICRC). Austria concluded by stating that it makes no sense to talk about solutions when there is no clarity on the risks and challenges associated with these weapons. This point was echoed by Switzerland, which argued that it is important to understand the risks before jumping to mitigation measures.
Pakistan also agreed that the discussion about risks has been ongoing for many years and highlighted that the topic was covered during the Fifth Review Conference (RevCon) of the Convention on Certain Conventional Weapons (CCW), almost seven years ago. It argued that the risks are well known and are increasing, and that, in Pakistan’s view, there is a lack of progress on means to address the issue. It argued that agreeing on multilateral rules is a crucial step—unless the GGE creates the means to address the security dimension of these weapon systems, it will not be able to fully address the risks. Similarly, Stop Killer Robots highlighted that discussions on the topic have been happening for many years. Due to the seriousness of the risks, it is necessary to adopt a legally binding instrument that clearly lays out prohibitions and limits.
Stop Killer Robots argued that the development and use of autonomous weapons involves a wide range of risks, including risks to compliance with international law, to ethical principles, and to international peace and security. Regarding international law, it mentioned the potential failure to comply with the principles of distinction, proportionality, and precaution, in addition to the failure to comply with international human rights law (IHRL) obligations, including the right to life, the prohibition of inhumane or degrading treatment, privacy, and non-discrimination. It also noted the potential failure to comply with international criminal law, especially regarding accountability. On the risks associated with ethics, Stop Killer Robots mentioned the dehumanisation of people by reducing them to data, the prejudicial effects of algorithmic bias, and several other concerns. Regarding the risks to peace and security, it mentioned the increased risk of arms racing and of conflict, a lowered threshold for the use of force, the prospect of widespread proliferation of these systems, the lack of clear prohibitions and regulations, and others.
The ICRC reiterated its understanding that there is a core issue from which all risks flow: the process by which AWS function. The user does not select the specific target, nor the precise time or location at which force is applied, which creates difficulties in anticipating and controlling the effects of the use of these weapons. Those difficulties bring risks for civilians, for injured soldiers, and for commanders responsible for applying IHL rules in specific attacks, in addition to raising ethical concerns. From the ICRC’s perspective, these risks need to be addressed through a prohibition on certain autonomous weapons and restrictions on others. It argued that it is eminently practical now to agree on and establish specific constraints on autonomous weapons in all circumstances, including on what they are used against, where they are used, and how they are used, and that it is necessary to address the legal concerns, humanitarian risks, and ethical concerns associated with these weapons.
The United States argued that the ICRC’s understanding of AWS as inherently problematic is not necessarily the view of others. The US noted that its joint proposal tries to recap past conclusions and advance understandings of the characteristics of these weapons. It suggested that discussing characteristics would be a good way forward, which was echoed by the United Kingdom (UK).
Article 36 argued that although the risks have been discussed for a long time, it is very important to have a set of categorisations of all the concerns and issues being responded to. It argued that limitations on geographical scope, duration, and other elements should be framed as legal rules, rather than merely as risk mitigation measures. It further argued that even if such limitations are open-ended, or approached on a case-by-case basis, it is still necessary to treat them as legal rules.
During the meeting held on 29 June, the US responded to the points from Article 36 regarding “unintended engagements”. It said that reducing accidents is a practical problem and that if we want to understand how to do so most effectively, then we need to examine good practices in using emerging technologies and automation.
In an intervention delivered on 29 June, the UK noted that discussion of risk in the GGE has largely been about the risks of artificial intelligence (AI). What has not been spoken about as much, it said, are the risks posed by too much regulation or a legally binding instrument, and the impact this could have on the ability of AI to better support the application of IHL.
Japan stated that paragraph 29 of the US-led joint proposal, which Japan supports, incorporates risk assessment and mitigation measures. It highlighted that the paragraph on this in the proposal includes language that was agreed by consensus, and encompasses possible civilian harms. It asserted that mitigation measures can be incorporated throughout the lifecycle of the weapon system, and that it is important to share good practices and risk mitigation measures, while taking into consideration security risks in each country.
France also commented on the same paragraph of its joint proposal and added that there are other measures that could be considered from the 2019 GGE report, including those highlighted on paragraphs 37, 38, 40, and 41. As one example, paragraph 40 stated that risk mitigation could include measures that control how the system can engage, the geographical scope, the role of the human operator, and the deactivation of the weapon system.
The UK stated that implementation of the same joint proposal to this GGE, which it also supports, would be informed by the knowledge and experience of all participants within the GGE and CCW, and perhaps even industry and academia. It explained that it sees risk mitigation as best taken forward by including more informed people in the discussion to build understanding and mitigate risk.
Pakistan stated that the term “risk mitigation” does not do justice to the security dimension, because countries have different perceptions on risk. It argued that it is always important to address the issue in a holistic manner and particularly address the various dimensions of AWS.
The US argued that its joint proposal outlines several measures to mitigate risks, divided into the four categories laid out in paragraph 29 of the proposal. It argued that its proposal differs from others because its language is based on past consensus. It further noted that it is framed around the concept of “unintended engagement”, which is broader than risk to civilians alone, as it also includes harm to medical personnel, possible escalation of conflict, and other elements. It argued that considerable discussion, in addition to case-by-case analysis, needs to be done in order to implement these measures. It also highlighted that another difference is that its proposal includes measures to prevent bias in advancing autonomous features, and that although it is not aware of bias in current weapon systems, it is important to address this.
Stop Killer Robots referred to the statement it delivered earlier in the week which set out a broad range of risks arising from the development and use of AWS. To mitigate these risks, a legally binding instrument that includes a combination of both prohibitions and regulations is needed. Stop Killer Robots recommended two types of prohibitions: one on systems that cannot be meaningfully controlled, and one on systems that are designed to target humans. It stated that non-binding principles and practices would permit disparities in national interpretation and implementation, would lack enforceability if the rules were not applied, and would effectively fail to mitigate the risks identified.
Good practices relating to human-machine interaction (HMI)
Austria observed that even if there is not yet much good practice to share in this area, it is important to have this topic on the agenda, stating that if good practice exists then states should benefit from knowing about it.
Germany referred to the working paper it submitted with France in 2021, which stressed the importance of HMI as a key debate in the discussion on AWS because of the role it plays in compliance. As outlined in their paper, weapon systems need sufficient human control to be retained throughout the entire lifecycle, making it indispensable that commanders and operators understand the weapon system well. France referenced this paper as well.
Germany acknowledged that the US-led joint proposal has a comprehensive list of good practice on a broad variety of aspects of the GGE’s discussion. It noted that it might have been helpful to include a few others as well.
The US spoke to the section of its joint proposal relevant to this topic, stating that this is an important area of work for the GGE. The joint paper contains several good practices in the area of HMI, eight of which are drawn from previously adopted GGE documents and four of which are new. The US explained that by listing good practices, the paper demonstrates how disparate strands of work within the GGE are, in fact, connected.
The UK supported these points, and further stated that it believes human-machine teaming is the best way forward. It has produced papers on this subject, including, most recently, an AI strategy, and said it would have a lot of expertise to offer.
The US responded to the argument that good practice is insufficient and new rules are needed by agreeing with that point, but explaining that it does not see good practices as the whole solution; rather, they would be helpful in advancing understanding about what new standards should be developed. It would encourage discussion about implementation of the guidelines contained in the joint proposal at future meetings.
Stop Killer Robots outlined its views on HMI, which it said is one element of the range of factors that should be considered in determining whether a human operator is capable of exercising meaningful human control over an AWS. While the quality of HMI needed will depend on the nature of the AWS being used, at a minimum the human operator should be able to interact with the system in order to limit the time and space of its operation, and be able to disable the system to prevent the use of force where appropriate to ensure compliance with international law and ethics. Stop Killer Robots also listed other relevant factors for assessing meaningful human control. It reiterated that determining the precise legal and technical parameters relating to human control that will form part of the final framework will best be achieved by commencing a process of negotiation of a legally binding instrument (LBI).
New policy developments
The Netherlands announced that it has recently finalised its national position on autonomous weapons, based on extensive research and the advice of the joint committee of the Advisory Council on International Affairs (AIV) and the Advisory Committee on International Law (CAVV). The new Dutch position supports the view that AWS which cannot be used in accordance with IHL should be prohibited and that those which can be used in accordance with IHL should be regulated. When announcing this at the GGE, the representative said that this “two-tiered” approach has been suggested before in the GGE and that the Netherlands finds it a useful way to structure discussions. The Netherlands will further elaborate its position at the July substantive session.
China announced it has a new position paper on AWS that will be submitted in advance of the July GGE session. While the representative did not have the document on hand to describe it in detail, he highlighted three issues relevant to the discussions in the informal meeting. His first point related to the scope of discussion: there are different understandings about what kind of weapon is being discussed, and there are also different types of autonomous weapon systems, which pose differing humanitarian risks. China has proposed a concept of “acceptable AWS” and “unacceptable AWS” in a past position paper. China’s second point focused on formula: while China supports the GGE negotiating a LBI, it is also aware that the GGE is divided on this. As such, it proposes to prohibit “unacceptable AWS” through a new CCW protocol and then to focus work in the GGE on appropriate ways to regulate acceptable AWS, without excluding the need to negotiate another LBI. Finally, China highlighted the dual-use nature of AWS and the importance of respecting the right of all states to technological progress.
Further work of the GGE
States discussed possible ways forward and how the work of the GGE should be carried on in the future. While most of this surfaced during the meeting held on 29 June, some points were made about this on the 27th. For instance, the US argued that the GGE needs a vehicle to progress the work and that its joint proposal is the right vehicle. It reiterated the argument made in previous meetings that states should consider what can be realistically achieved this year and that its proposal is a possible outcome. It argued that its proposal can be a basis for further GGE work, and that it does not preclude further work on the topic.
Japan also highlighted aspects of the joint proposal and said it is necessary to proceed on discussions of what can be agreed this July, at the second session of the GGE.
Austria asked the US and its partners for clarification on how they see the future work of the GGE. Austria expressed understanding of the idea to move quickly, but would like to know how this would be linked to the GGE’s future work and the other proposals. The US responded that it is important for states to build a strong foundation, “brick by brick”. It stressed that it is necessary to have a good foundation for future work because these issues will be with us forever and require ongoing attention. It argued that this is a very complex issue, and that narrowing differences through a “brick by brick” approach is important. It highlighted that it wants this issue to remain in the CCW, and that states need a framework to provide a platform for discussions to move forward.
Switzerland argued that many states share the view that there is an urgent requirement for the international community to address the particular risks and challenges posed by AWS. It highlighted that many states consider this an important part of the context of “effective and multilaterally agreed rules”, quoting the joint proposal that it supports.
Switzerland built on the “brick by brick” point of the US when it expressed its belief that adopting rules would be “one brick among many, a brick that accompanies specific restrictions, a brick that accompanies specific limits”—for instance, some states advocate for more flexibility and propose instruments other than legally binding rules. Despite these differences on process, Switzerland highlighted that all states should undertake risk assessment and mitigation measures throughout the lifecycle of AWS, whether or not in the context of binding rules.
Pakistan reinforced its position that clear legal rules are necessary and nothing short would be sufficient. It asserted that under the CCW, states can either regulate weapons or prohibit them, and that in its view, an instrument on the subject cannot be used as a vehicle to legitimise the use of new weapons, as this is not the purpose of the CCW.
There was further interactive discussion about the way forward during the meeting held on 29 June.
China stressed that the Chair’s paper to be produced for the July session should incorporate all suggestions and proposals in a balanced way. The Philippines agreed and said it could not accept a working method that would be based on only one proposal and not others. Both emphasised the need for a Chair-driven process.
Japan said it is glad about the progress that has been made in terms of the number of proposals, particularly considering the constraints of time and the informal nature of these meetings. In light of these constraints, but also the progress that has been made, Japan said it would be sensible and reasonable to compile the proposals that have been made during the GGE’s meetings.
Japan also shared that while it has co-sponsored the “principles and good practices” proposal, the document is not exclusive and it is happy to see there are other proposals on the table.
The Philippines said it agreed with what is in the US-led proposal on voluntary measures, and that commencing a process for a LBI while also streamlining voluntary measures does not have to be mutually exclusive. It observed that the GGE now has enough elements to start discussing an LBI based on a general prohibition that also states in general terms there is a need to regulate AWS. Voluntary measures could be annexed to an LBI, suggested the Philippines. Austria supported these points in general terms, urging the Chair to be ambitious in the July meeting and observing that there is now momentum in the process.
The US acknowledged and recognised the work that this Chair has done throughout the process, noting that the GGE has had its best successes over the years when its chairperson has been encouraging and has given states good opportunities to work. However, the US also registered disappointment at what it saw as low levels of energy and participation among GGE participants, and concern about the number of interventions arguing that its proposal should not be given any precedence. In this regard, the US recalled the mandate of this GGE to consider proposals. It also said it would not be fair to look to the Chair as a substitute for full participation by states.
In response, the Philippines clarified that a “Chair-driven process” does not imply any abrogation of a state’s role as an active delegation. In its view, the barometer should not be set at the level of participation in the informals, but rather the volume and depth of written submissions. The Philippines highlighted that some delegations have limitations on what level of engagement they can have in an informal setting, and that others may not have capacity for full engagement, or that their engagement has been done in the context of written inputs rather than oral statements.
Aotearoa-New Zealand expressed its hope that, if a state is not speaking much, it is not also perceived as not listening. Importantly, Aotearoa-New Zealand pointed out that there is a deeper question at play during these informal meetings, which is not about substance but about the relationship between the substantive elements and what they will be anchored to, in terms of an overall outcome. For example, the US and others see the substance as being anchored to voluntary measures, which New Zealand would not necessarily oppose, though it has a preference for rules and limitations. It suggested that because positions are evolving, as the statements from China and the Netherlands show, there is a need to move discussions into a formal context with the whole of the GGE.
For his part, the Chair spoke often during the meeting on the 29th about his role and vision for the way forward. While he has some power in his role, his legitimacy is derived from the “truthfulness” which he will bring to the Group. He said he will try to translate the gist of the proposals and discussions, but his doing the “heavy lifting” on this does not mean he will do it all on his own, and GGE members have the ability to redirect whatever he puts into his paper. The Chair said he is inclined to be daring in this work because he believes there is a need for new norms, but that there are some key questions to consider in relation to the quality of a norm and also its effectiveness: is it better to have a good norm that is followed by not very many states, or a very bad norm that is followed by all?
At the close of the meeting on 29 June, the Chair said he is planning to have intensive bilateral consultations with several delegations in the next two weeks and hopes to have a draft report available in mid-July or before the session. He hasn’t decided yet on the format of the report but envisages it to be simple and objective.
The informal intersessional discussions have been a useful way to maximise time between formal sessions and dig into the content of the various proposals that have been submitted to this GGE. But they have also had their limitations, as some delegations noted on Wednesday, and more generally it feels as though the GGE is approaching a crossroads. As Aotearoa-New Zealand noted, the substantive content of discussions is ultimately tied to the form the discussions will lead to, and be anchored in.
There is now significant support for creating a LBI that would ensure meaningful human control over weapons and the use of force, as well as provide for responsibility and accountability and set out high ethical standards. But there are also voices who reject the establishment of new legal commitments and prefer voluntary measures alone.
In a report from the first intersessional meeting held in April, this publication outlined that there are two paths forward emerging. With the formal GGE session less than one month away, we ask now: what bricks will form the foundation of that path? Will it be a path that protects the interests of some of the most militarised states or will it be one that protects humanity?