AWS Diplomacy Report, Vol. 2, No. 2

Editorial: It’s Time to Write the Rules on AWS Before Algorithms Write Them For Us
15 May 2025


Ray Acheson | Reaching Critical Will

The informal consultations on autonomous weapon systems (AWS), held at the UN in New York on 12 and 13 May 2025, provided an expansive consideration of many legal, humanitarian, security, technological, and ethical dimensions of these weapons. The consultations were intended as a space to enlarge the scope of discussions on AWS beyond what has taken place at the Group of Governmental Experts (GGE) at the UN in Geneva. They were also oriented toward offering a chance for delegations not represented at the GGE to learn and engage. The consultations fulfilled both objectives, with at least 96 states and many civil society groups participating actively to offer views and raise questions needing further examination.

Most of the states and civil society groups, along with many of the expert briefers and representatives of various UN agencies, strongly supported the development of a legally binding instrument on AWS. Several supported an outright ban on weapon systems that can target human beings or that cannot be operated with meaningful human control. This reflects the growing convergence at the GGE in support of a treaty that prohibits, restricts, and regulates AWS. The UN Secretary-General and the President of the International Committee of the Red Cross (ICRC) have jointly called for the elaboration of such a treaty by 2026. This call was amplified and supported by several participants at the international consultations, with Amnesty International urging states to adopt a resolution at the UN General Assembly establishing negotiations next year.

“The lessons of history must guide us,” said Sierra Leone’s Minister for Foreign Affairs and International Cooperation during the opening session. “When weapons are developed without restraint, they are eventually used without restraint.” And, as the Ambassador of Costa Rica warned, “When machines become arbiters of life and death, humanity itself is placed at risk.”

The ICRC President likewise cautioned, “If this technology is allowed to develop and be deployed unchecked, we are accepting a world in which machines can choose who lives and who dies. A lack of decision-making and political will today to regulate these weapons under international law will condemn future generations to live with the consequences.”

Centering human rights and dignity

This is why most states active at the GGE and these informal consultations are intent on developing global, binding rules for AWS. As Algeria said, these weapons pose “fundamental threats” to the international legal order, which requires immediate collective action.

Each session of the consultations focused on a specific thematic issue. Across each, a clear focus on human rights and dignity emerged, highlighting the importance that states, activists, academics, international organisations, and others give to these considerations in relation to AWS.

The emphasis on human rights and ethics was an important addition to the international humanitarian law (IHL)-based conversations at the GGE. In this context, the consultations also gave new space for delegates to consider positive obligations for an AWS treaty. Several states and civil society groups put forward concrete suggestions, ranging from victim assistance and environmental remediation provisions, to measures to prevent algorithmic bias, to ensuring gender perspectives and diversity.

The elaboration of concerns about the use of AWS in policing and border enforcement, and the potential harm to marginalised communities such as women, LGBTQ+ people, Indigenous Peoples, migrants, and people of colour, was also essential. Malawi warned that the use of algorithms in AWS would be a “catalyst of wanton killing propelled by hate based on skin colour, gender, race.” It is this understanding that has compelled many civil society organisations to mobilise against autonomous weapons, particularly in the Stop Killer Robots campaign.

The centrality of human rights to AWS cannot be overstated. Human rights apply both during and outside of armed conflict; AWS are likely to be deployed not just by militaries in war but also by police, at borders, and by other agents of state violence and control. Given the rising suppression of protest, freedom of expression, and right to assembly and association around the world, and ongoing violence of policing globally, it is imperative to address AWS in multiple contexts. During the consultations, Amnesty International explained several of the key human rights challenges posed by AWS, noting:

Many of the technologies underpinning AWS, e.g. remote biometric recognition, and predictive analytics, are inaccurate, biased, and in some instances incompatible with international human rights law by design. Designed on the back of data obtained through mass surveillance and undergirded by well-documented instances of bias by the scientific community, these technologies risk exacerbating existing human rights harms associated with these “component” technologies. AI-powered technologies could fuel mass violations of human rights.

In addition, Privacy International raised important concerns about what data is being fed into AWS and how it is acquired. The data-driven systems that algorithms and artificial intelligence (AI) require mean that data is being given to military or law enforcement actors without consent; when that data is then used to target humans, it comes back to harm the very people it was taken from, threatening human rights, dignity, and privacy.

Many states articulated specific concerns around the use of AI in AWS, highlighting issues around algorithmic bias and technological limitations that make weaponising AI incredibly dangerous. As Costa Rica said during the opening session, algorithmic bias “risks encoding our worst human prejudices into the very systems that would make life-and-death decisions.”

Sierra Leone noted that the increasing integration of AI in armed conflict, particularly in regions already experiencing acute humanitarian suffering, is accelerating warfare with diminished human oversight. “Errors made by autonomous systems are not only tragic,” said Sierra Leone, “they are dangerous, potentially sparking escalations that no human intended, and no system can control.”

Amnesty International used the example of automated decision-making systems in the context of fraud detection to illuminate the potential harms of AI and algorithmic bias in AWS, while the International Committee for Robot Arms Control (ICRAC) noted that generative AI systems, which are likely to find their way into the wider AWS environment, are known to hallucinate—they give false or misleading outputs that are difficult to distinguish from accurate results. ICRAC warned that “this behavior is guaranteed by its technical architecture and these types of errors can only be managed not eliminated.”

These and the other concerns expressed at the consultations indicate that the most effective way to prevent harm and human rights violations is to outlaw AWS, particularly those that can target human beings.

A few delegations argued that all that is needed to address AWS is “faithful implementation” of IHL. And some panelists offered problematic solutions to the challenges posed by AWS; for example, the United Nations Institute for Disarmament Research (UNIDIR) suggested that the solution to algorithmic bias is building even bigger, more “inclusive” datasets—inevitably then giving more data to militaries and police without people’s consent—or asking AI companies to “fix” their algorithms. In contrast, most participants—states and civil society, as well as other UN entities—were clear that the only solution that respects human rights and dignity is to ban all AWS that can target humans.

The UNGA “versus” the GGE

A few delegations were hesitant to embrace what they see as a competing path to ongoing work at the GGE. Some of the states that have been systematically blocking the adoption of substantive or meaningful outcomes at the GGE objected to any “parallel” workstream on the issue of AWS, including Israel, the Republic of Korea, Russia, and the United States. Australia delivered a statement on behalf of 21 states arguing “that a process outside the GGE would take us backwards rather than forwards.”

But most participants highlighted the complementary nature of the informal consultations, which provided space for an expanded discussion in support of work at the GGE. Moreover, many delegations pointed out that the GGE has so far failed to make the kind of progress needed to adequately confront the urgent issue of AWS.

Brazil noted the overall “diminishing culture of consensus” in multilateral fora where states act as if they have a unilateral veto over substance and even agendas. This culture has impacted the ability of the GGE to move beyond a discussion mandate to a negotiating one, despite the fact that the majority of participants support the development of a legally binding instrument.

While recognising the role of the GGE, Egypt noted the pace of technological development has far exceeded diplomatic action. It pointed out that the GGE’s current mandate has been in play for almost a decade and asked if participants are trying to address a moving target with an outdated mandate. In this context, Costa Rica warned, “The window for preventive governance is closing faster than our diplomatic processes are moving.” Stop Killer Robots raised a similar concern, pointing out, “The rapid proliferation of autonomous capabilities in weapons systems has profoundly altered the strategic landscape since the inception of international discussions in 2013, dramatically heightening the stakes and exacerbating the risks associated with continued inaction on the regulation of autonomous weapons systems.”

Banning autonomous weapons

The consultations were thus future-focused, with most participants demanding urgent action to prevent further harm to human beings, international law, and global peace and security. As UN Secretary-General António Guterres said in a video message to the meeting, “We are living through deeply dangerous and divided times, and we don’t have a moment to lose.”

Costa Rica similarly noted, “Many of us are concerned that the current international environment is characterized by a decline of multilateralism and of waning fidelity to international law. Here we have a clear opportunity to demonstrate that preventive governance is not only possible, but essential—that the United Nations can act with foresight rather than hindsight.”

It is clear that most states want a ban on autonomous weapons. So does the world. In its overview of the 29 submissions to the UN Secretary-General’s report on AWS received from international and regional organisations, the International Committee of the Red Cross, civil society, the scientific community and industry, Stop Killer Robots said that across these diverse stakeholder voices, the common core principles are:

  • Meaningful human control must be preserved over critical functions of weapons systems.
  • Accountability cannot be automated. Machines do not bear legal or moral responsibility; people do. Any deployment of lethal force must be traceable, explainable, and justifiable under international law.
  • International humanitarian and human rights law must be upheld. No system, autonomous or not, should be permitted to operate in a manner that violates international law.
  • Ethical lines must not be crossed. To allow a machine to make life-and-death decisions is to strip human beings of one of their most fundamental rights—the right to be recognized and treated as moral agents.
  • We need a legally binding instrument. Voluntary norms and non-binding declarations could be useful interim steps, but they are not enough. The time has come to negotiate and adopt a treaty that prohibits and regulates autonomous weapons systems.

If these consultations, and the past decade of work at the GGE, are to have any meaningful effect, states now need to establish and conclude negotiations for a legally binding instrument on autonomous weapon systems. As Costa Rica pointed out, “History has consistently shown that it is far more effective to ban or regulate weapons technologies before they are deployed by militaries at scale. In the case of autonomous weapons, we must write the rules before the algorithms write them for us.”
