
CCW Report, Vol. 13, No. 2

Disarmament is Always the Smartest Choice 
11 March 2025


Laura Varella | Reaching Critical Will, Women's International League for Peace and Freedom


On 3–7 March 2025, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) met in Geneva for its first session of the year. Unlike previous meetings, which some might say are generally marked by a feeling of impending doom, this session felt a bit more hopeful. That was due to the high-quality discussions that delegations were able to have, particularly around human control. The rolling text proposed by the Chair included the term “context-appropriate human control and judgment,” which prompted discussions between those who wanted to keep the term or similar formulations in the draft—the majority in the room—and those who wanted to delete it.

Delegations were encouraged to justify their positions, instead of just reiterating them, and interacted with other states’ remarks. This dynamic allowed for better clarity around states’ understandings of human control and its relationship to international humanitarian law (IHL). On Thursday, when the Chair proposed a few guiding questions to unpack the topic, the GGE had one of its most in-depth debates in recent years. By being encouraged to react spontaneously, delegations were able to better reflect on their own understandings and their countries’ positions, and to see how these differ from those of other states. While divergences clearly still exist, the feeling in the room was that at least now delegations have more clarity about the reasons behind each other’s positions. And as expected, some of those reasons proved to be quite problematic.

Narrowing the scope

While the interactive exchange was positive, it also covered the same ground as exchanges earlier in the GGE’s history. Those who have been working in the GGE since its beginning will recognise many of the same debates about terms and approaches, making it feel like we are going in circles in this forum. As in previous years, some states approached the debate around the characterisation of LAWS as an exercise in narrowing the scope of discussions. One of the ways they did this was by insisting on keeping the element of lethality in the characterisation of autonomous weapons—and defending restrictive interpretations of this term.

Russia argued that the concept of lethality, “as a very minimum,” requires ending lives. While it recognised that it could also include “damage to the health of humans,” Russia said that damage to objects would be beyond the scope of the concept. Singapore defended an even more restrictive interpretation, suggesting the definition, “lethal weapons mean weapons that are designed to kill.” Israel, the Republic of Korea, and the United States (US), among others, also supported other restrictive interpretations of “lethal”.

The understanding that LAWS would be weapons systems that kill or injure people, not encompassing damage to objects, was strongly objected to by many delegations. As Norway emphasised, “If we were to focus only on weapons that are designed to kill humans, we would effectively be characterizing LAWS only as anti-personnel systems and, as the ICRC pointed out, IHL addresses more than just that.”

In addition to the insistence on “lethal” and the restrictive interpretations of this term, the second way by which some delegations tried to narrow the scope of the characterisation of autonomous weapon systems was by defending a cumulative approach to the tasks assigned to such systems, such as to “identify, select, and engage” targets. That is, they argued that the system would have to act autonomously across all these functions. In contrast, several states pointed out that adopting a cumulative definition would exclude a whole range of systems that should be under consideration. Many suggested that the Group should adopt a broader working characterisation of autonomous weapons.

Listening to the debate, it felt as if some delegations had forgotten that these discussions are not a mere theoretical exercise but are supposed to result in an instrument that can protect real people from the very harmful impact of AWS. As Switzerland pointed out, “We would be very sceptical if everything that comes out of our work would be restricted to a narrow scope, notably if we had a scope that is based on a cumulative definition and the limitation to lethal systems.”

Saying that a narrow scope of characterisation of AWS would generate a loophole is an understatement. If only anti-personnel weapons are considered, and among them only those that are able to perform all three tasks of identifying, selecting, and engaging “targets” in an autonomous manner, we would be facing a massive regulatory gap. Are states really comfortable with the idea of living in a world where certain autonomous weapons, maybe even most of them, are used without any regulation whatsoever?

Persistent myths

The lack of awareness about the risks posed by autonomous weapons—or the illusion that some would be immune to them—was also present during the debate around human control. As mentioned earlier, having an in-depth conversation about human control was positive; some could even say they’ve been waiting for this debate for over ten years. But during the discussion, arguments in favour of autonomous weapons showed that some delegations maintain questionable assumptions about these technologies, despite the decade of discussions in which other states, the ICRC, and civil society have provided arguments and data to the contrary.

The US, for example, said that autonomous technologies can be used “to create smart weapons that can be used with greater precision and less risk to civilians and civilian objects,” and that there would be nothing unethical or illegal about this. “If a computer can be used to make a decision on a battlefield that saves lives, my delegation believes it is more ethical and more consistent with the law to use that computer rather than to let civilians die, because at least then decisions about life and death won't be given to machines.” The US also said that states need to recognise that “autonomous functions and weapon systems can be used to save lives and strengthen the protection of civilians. We mustn't privilege the dumb weapons of the past over the smart ones of the future.”

There are some assumptions in the US’ remarks that merit challenging. The first is that automation and artificial intelligence (AI)-enabled technologies result in smarter systems. One counterexample is the warning from experts that large language models being commercialised to militaries have been giving “worthless” advice. In the US context, there are also several concerns about how the Silicon Valley model of “move fast and break things,” applied to the military sector, results in a culture of overpromises and myths about possible technologies.

Historically, similar arguments about the increased precision or accuracy made possible by new technologies have been debunked. The rich body of evidence against the alleged precision of drones, which have resulted in thousands of civilian casualties worldwide, is one example.

When we look at how automation and AI are being used in warfare and policing today, there are many examples of how they tend to exacerbate harm. The use of systems like Lavender by Israel in Gaza, which was instrumental in the killing of civilians and destruction of civilian infrastructure, is a case in point. Additionally, examples of digital dehumanisation via systems like facial recognition demonstrate that AI-enabled technologies are discriminatory. In a capitalist, patriarchal, colonial, and racist world, autonomous weapons are one more tool of violence and oppression.

The US delegation’s focus on the alleged positive aspects of AWS, instead of the very real harm similar technologies are already causing, is frustrating. It is especially concerning in light of recent developments in the country, where the new US administration has fired its top IHL military lawyers because it doesn’t want people who “attempt to be roadblocks to … anything that happens.” This is a good example of why we need strong, legally binding international rules, and why we need them urgently.

Looking ahead 

The GGE should continue its work, building on the discussions held this week. The Group’s mandate is to submit a report to the Seventh Review Conference of the CCW in 2026, but it also tasks the Group with completing its work as soon as possible, preferably before the end of 2025. As the Chair reminded delegations, that is less than ten months, and only five meeting days, from now. “The urgency of our work cannot be overstated, and we must act decisively and with a cooperative and flexible spirit,” he emphasised.

Delegations should make use of all opportunities to continue conversations ahead of the next session of the Group, scheduled to take place 1–5 September. One such opportunity is the informal consultation taking place on 12–13 May at the United Nations in New York. This will be a great occasion to discuss critical issues that remain missing from the current debate at the GGE, including ethical considerations, human rights, anti-personnel AWS, proliferation, use of AWS by non-state actors, international criminal law, use of AWS by domestic law enforcement, and environmental concerns.

As the UN Under-Secretary-General and High Representative for Disarmament Affairs, Ms. Izumi Nakamitsu, said in a video during the GGE this week, “Your deliberations here could pave the way to shape the future of warfare in the way that upholds the principles of international humanitarian law, human rights, and humanity.” 

Advocating for what could be loopholes in regulation is a conscious choice to privilege weapons over humanity. To privilege profit and power over the wellbeing of populations worldwide. The smarter choice here is to join others working to create an effective and comprehensive treaty that can protect humanity from autonomous weapons. 

 
