
15 May 2014, Vol. 1, No. 3

Editorial: Law and morality
Ray Acheson | Reaching Critical Will of WILPF




Wednesday’s discussions pitted philosophers against legal scholars in a debate about the morality and legality of autonomous weapon systems. While the philosophers highlighted the ethical challenges of turning over decisions about the life and death of human beings to machines, the legal scholars argued that autonomous weapons might be more effective at protecting human beings. They also claimed that such weapons are not “inherently illegal” and that existing international law is adequate to regulate their use. However, as several delegations highlighted during the ensuing discussion, the fact that a weapon is not inherently illegal does not mean that developing or using it is necessarily lawful or moral.

“Morality is a process,” argued Professor Peter Asaro of the International Committee for Robot Arms Control (ICRAC). Moral decision-making is not just about choosing what action to take, but about choosing the perspective from which to take it and the kind of world you want to live in. He noted that a machine would not make decisions based on these factors. Because machines cannot engage in moral reasoning, cannot weigh the implications of an action against the value of human life, they should not be programmed to make decisions about the use of force.

Arguing from the opposite perspective, Professor Matthew Waxman suggested that autonomous weapons would handle this responsibility more objectively. Such weapons, he argued, would be more likely to avoid the abuses, violations, and outrages “that can only be committed by human beings” driven by emotion. But the crux of all three legal scholars’ arguments was that autonomous weapons, since they do not yet exist and thus cannot be reviewed, cannot be preemptively declared illegal. They seemed to argue that until such weapons are proven to inherently violate international humanitarian law (IHL), they should not be subject to prohibition.

This attitude is extremely shortsighted. It precludes preventative action, even though, as several delegations including Japan and Sweden argued, it is questionable whether autonomous weapons could comply with IHL. It also assumes that weapons reviews, whether of existing or future technologies, are infallible. Yet because a system’s level of autonomy bears an inverse relationship to its predictability, it is hard to see how fully autonomous systems could be effectively reviewed for legality.

In this regard, Article 36, an NGO named after the provision of Additional Protocol I that requires legal reviews of new weapons, has suggested that a better legal framing for autonomous weapons could be that the principles of humanity require meaningful human control over the use of force: “We cannot simply rely on a determination of whether or not such weapons can comply with rules on distinction, proportionality, or precaution. The principle of humanity can be seen to require deliberative moral reasoning, by humans, over each individual attack decision.”
