1. Introduction: anthropomorphisms, autonomy and the law

This article is about the use of autonomous weapon systems (AWS) and the law of armed conflict. While ‘autonomy’ and ‘robotics’ have become ubiquitous terms, their exact definitions remain unclear. The core of the idea of an autonomous weapon system can, however, be defined: a weapon that operates on its own in a qualified way. This implies both that the system is technologically advanced enough to sense its surroundings and act accordingly, and that it is actually left alone. How much weight one places on each of these requirements varies, and I shall return to the issue of definitions, but first let me provide an introduction to the title – and the topic – of this article.

The title of this article is an allusion to Genesis 1:27: ‘So God created man in His own image; in the image of God He created him; male and female He created them.’ Both the title of this paper and the quoted passage from the Bible are anthropomorphisms, namely the attribution of human features to non-human entities, whether these are God or a Robot. It is seemingly difficult for human beings to understand the world around them without attributing some human features to what they see. My computer lives its own life, my car is a hard-working old creature, my horse is cocky – but also kind – my cat is offended and the wind is angry. If we give the benefit of the doubt to the questions of whether my cat really is offended and whether my horse actually is both kind and cocky, we may state with certainty that my car is nothing but a car: it is not really hard-working, it is simply an old Land Cruiser that happens to be almost impossible to kill.

With the attributes mentioned so far it is fairly easy to tell which ones are simply descriptive attributions and which ones are true characteristics – but what about autonomy? Can weapons truly be autonomous? In this paper, I argue that weapons are not, and in the foreseeable future will not be, autonomous in the human way of understanding autonomy. I furthermore argue that advanced weapon systems remain exactly that: weapons under the law of armed conflict. They do not, I argue, become human-like entities with their own will. The machine does not become a commander. Furthermore, the machine as such is not obliged to follow the law, but the human being using the machine is. Advanced machines do, however, have an enormous capacity to process data. This ability must be used under the control of human beings, yet the degree of control in time and space is not a set menu. How this control can be exercised over a system that processes information – far beyond the ability of a human brain – is a core challenge.

In this article, the question of how AWS can be used in accordance with rules is examined from the perspective of the legal framework governing the conduct of hostilities, the ‘Law of Armed Conflict’ (LOAC), and not from other perspectives, such as international human rights law, which governs (inter alia) the right to privacy. Autonomous weapons, like any weapon, are governed by LOAC and its sub-branches governing means and methods of warfare as well as the law of weaponry – the latter containing more or less specific rules concerning disarmament. Autonomous weapons have not become subject to any specific prohibitions, either concerning their production, stockpiling or transfer, or their use. An initiative has been taken, however, by Human Rights Watch (HRW) to ban ‘killer robots’, and the UN has hosted a number of Expert Meetings on Lethal Autonomous Weapon Systems (LAWS) within the framework of the Convention on Certain Conventional Weapons (CCW). In the following, I shall first address the different approaches to the definition of AWS; then, in sections 3 and 4, I shall address more closely the legal framework governing AWS and discuss what the law requires from the commander using such systems. Finally, I present my conclusions in section 5.

2. The world as it is and the world according to the law: what is the definition of a ‘fully autonomous weapon system’?

2.1 Unpredictable but not self-aware systems

Definitions are crucial for legal regulation. It is impossible to ban or restrain an unknown entity. Furthermore, how a leading State or the organised international community chooses to define autonomous weapon systems will determine whether these weapon systems are seen as existing systems or solely as weapons of the future. Yet a commonly agreed definition appears to be truly difficult to achieve.

In the past, treaty prohibitions governing the means of warfare have emerged after the international community witnessed a specific use, such as the use of gas during the First World War and the vast civilian damage caused by cluster munitions and anti-personnel mines. Some prohibitions have been introduced due to the potentially devastating use of particular weapons (for example, explosive projectiles under a certain weight and blinding laser weapons), either with regard to the indiscriminate effect on civilians or the potential suffering inflicted on combatants. AWS have not (yet) proved devastating in use, and opinions differ as to whether they are even in use. Partly because of this, the definitional challenge becomes even more demanding.

Above, in the introduction, I have already presented a preliminary definition of an autonomous weapon as one that operates on its own in a qualified way, which implies both that the system is technologically advanced enough to sense its surroundings and act accordingly, and that it is actually left alone.

How one defines a ‘fully autonomous weapon system’ beyond this preliminary approach depends on whom you ask. There are at least three approaches to autonomy. According to the first, the concept is used in a rather wide sense, referring to the absence, or low intensity, of human control in time and space – namely, that the system is left alone. This category encapsulates existing automatic weapon systems, from the simplest forms of pre-programmed mines using trip-wires or signatures, to the most advanced automatic air-defence systems. Some also divide this broad category into two, arguing that the way a weapon is used – that it is actually left alone – is a separate category and that the true wide approach embraces systems that are able to select and engage targets without human intervention. In this article I view these as one broad category. If one uses a broad definition, the same weapon system may be both autonomous and automatic (or semi-autonomous): where the system operates under a certain degree of human control, it will be automatic, and where it operates under fewer (or hardly any) human-imposed limitations in time and space, it will be autonomous. Unlike others, I do not link the definition of autonomy to whether the system, when used in an autonomous mode, is used in accordance with LOAC. Very often, the degree of human control will determine whether the system is used lawfully. This does not, however, necessarily influence the definitional question. Having said this, some do link the definition to whether the system can be used in accordance with the law – this is the third category, dealt with below.

A second (narrower) category refers to the inherent technical features of the weapon system, reserving the term autonomy for systems that are technically advanced enough to understand a mission – that is, to search for, identify and attack targets without any human operator intervening – or simply for systems that are able (through technology) to change their behaviour according to changes in circumstances, i.e. that are adaptive. In the latter case, some refer explicitly to artificial intelligence, others do not. The core of the first part of the second category, however, is that none of the existing systems is (usually considered to be) capable of understanding the commander’s intent, and thus of being fully autonomous. A system capable of knowing its commander’s intent would know its commander very well and, more often than not, act according to his intentions. This approach leaves us with the impression that the term ‘autonomy’ is reserved for machines that operate in a way beyond human understanding. When finally understood, they are called ‘automatic’. If understood in this way, the group of autonomous systems will change in accordance with technological development.

A third category links the definition to the ability of the system to abide by international law. The following ‘working definition’ was proposed by Switzerland during the April 2016 informal meeting of experts on LAWS within the CCW framework: ‘weapons systems that are capable of carrying out tasks governed by IHL in partial or full replacement of a human in the use of force, notably in the targeting cycle’. Some commentators have also presented a ‘functional approach’ to AWS, leaving aside the challenge of definitions and focusing on the different challenges AWS represent for the law when looking at the different elements of the system: its munitions and platforms. This approach resembles the approach that would, in any case, necessarily have to be taken when applying the law as it is to new technology.

While the first definition suffers from being over-inclusive, the third suffers from being circular. The second definition, as understood in the UK view, may be under-inclusive when referring to a ‘human level’ of understanding a mission, appearing to reserve the term ‘autonomy’ in practice for human-like robots only. In this paper, I understand autonomy according to the second view above (though not in a form identical to the UK view), as a system’s technical ability to adapt to changes in circumstances and, due to this, its ability to search for, identify and attack targets without human intervention. This means that I exclude from the category of autonomous weapons, for example, loitering missiles which are pre-programmed to search for certain targets according to a pre-defined algorithm and to attack these without human intervention. Such weapons are, when operating without errors, predictable. Autonomous systems are, by their very nature, unpredictable. The scope of unpredictability is, however, limited because AWS do not possess self-awareness. Even so, the deeply rooted scepticism towards autonomous weapon systems appears to lie precisely in the fear of the unpredictable features of these systems. I argue here that the anthropomorphism of autonomy contributes to the fear of the systems and to our expectations as to their behaviour. Hence, because we think their behaviour is autonomous in the human meaning of the word, we fear the machines and expect from them behaviour similar to that of a human being. I shall pursue the core of this anthropomorphism in the following.

2.2 Autonomy and machines: putting a dress on a cow?

‘I am not a gun’, says the cartoon robot in Brad Bird’s 1999 film The Iron Giant, when the Giant discovers that the boy is his friend. The Iron Giant is in possession of one of the most human-specific characteristics: the capacity for perception and self-awareness. That is, the ability to model a picture of the real-life world based on its own senses and, thereafter, to place itself within that picture. The ability to perceive and be self-aware is among the core abilities expected from military commanders: to be able to adjust the use of force to the circumstances ruling at the time. Autonomy is thus an intrinsically human term. An autonomous person has the freedom and ability to govern himself or herself and to act according to his or her perceptions of the surrounding world. Using the term for machines entails a humanisation of the machine – an autonomous machine constitutes an anthropomorphism. By necessity, the exercise becomes partly strained, which in turn may contribute to the difficulties of agreeing upon a definition. Furthermore, the anthropomorphism may create an illusion that an ‘autonomous’ machine will behave just like a human being going rogue, and that the machine may choose its own enemies and create its own goals and values. I do not, however, believe that a machine going rogue in the same manner as a human being is the real issue of concern. This is why the heading of this sub-section asks whether we are ‘putting a dress on a cow’ when using the language of autonomy on machines. Instead of ‘dressing up the cow’ and expecting human-like behaviour, the focus could more fruitfully be on those characteristics of the weapon system suited to describe the system as ‘autonomous’ and, subsequently, on dealing with these characteristics according to the law.

The linguistic aspects of autonomy also invite us to address a deeper issue already raised at the outset of this article – do we deal with machines or men before the law? Are weapons (and systems) becoming humans – or are they simply a means of warfare? Are the weapons becoming their own users? As already noted in the introduction, I argue that machines remain a means of warfare, regardless of whether they resemble humans in certain respects. A number of other aspects of whether machines, in one way or another, become their own users and thus remove human presence are, in my view, primarily but not exclusively of ethical concern. The question ‘who are they?’ when speaking of autonomous weapon systems also relates to issues of accountability – namely, the question of where to place responsibility. Beyond what is discussed in sections 3 and 4 below, I do not address issues of accountability in this article. I do, however, address those legal assessments that presumably require a human touch. Let us, therefore, turn to the main question: what are the legal requirements that autonomous weapon systems can or cannot comply with?

3. The law governing autonomous weapons: legal assessment as a human venture

The point of departure concerning AWS was introduced above: AWS are not subject to specific regulation – or a ban. Today, AWS are subject to the general rules governing the means and methods of warfare (traditionally associated with the Hague Conventions); that is to say, they must not be of a ‘nature to cause superfluous injury or unnecessary suffering’ to combatants, they must not be inherently indiscriminate, and they must otherwise be used in accordance with the principles of distinction and proportionality, as well as the duty to take feasible precautions in attack.

Autonomous weapon systems are seldom claimed to be mala in se, as, for example, chemical and biological weapons are. Their potential for being uncontrollable is what causes concern. Limitations upon autonomous weapons ought to be achieved through the enforcement of the prohibition of means and methods of a ‘nature to cause superfluous injury or unnecessary suffering’, the prohibition of indiscriminate attacks and the duty to take feasible precautions in attack. Admittedly, some arguments may be offered with regard to the prohibition of means and methods of a nature to cause unnecessary suffering, such as those concerning the ability to recognise a surrendering enemy. It is, however, first and foremost with regard to the prohibition of indiscriminate attacks and the precautionary duties that legal challenges have accumulated. I therefore devote the remaining parts of this article to the latter rules.

Although there is no requirement in the law that attacks must be conducted by humans (they may, for example, be conducted by animals such as dogs or dolphins), the application of law is not a mechanical process that can be performed by machines in its entirety. Legal norms within this field of law are highly discretionary, using words such as ‘excessiveness’ and ‘advantage’; they call for an assessment by military commanders. This has a twofold implication: a duty to assess for each military commander according to his or her rank and situational awareness, and a flexibility to apply the law in accordance with rapid changes in circumstances – the ‘fog of war’. To fulfil the duty to assess, there must be a human being somewhere ‘in the loop’ when attack decisions are made using autonomous weapon systems. The pertinent question is – where in the loop? This question relates to the intensity of weapons control in time and space. The question can also be reformulated to illustrate the relationship between man and machine: where and to what degree does ‘man’ have to interfere with ‘machine’ in attack decisions? Highly discretionary legal concepts such as ‘excessiveness’ and ‘advantage’ appear human in character. Usually, the question of whether an attack was launched in order to achieve a ‘military advantage’ must be determined according to a threshold of ‘reasonableness’. Simply put, an anticipated advantage counts as a military advantage of a certain weight if a reasonable commander under similar circumstances would consider it to be so. In turn, the notion of reasonableness has a specifically ‘perception-based’ feature. An attack decision must always be reasonable under the concrete circumstances. This latter point arguably represents an inherent challenge for machines with little or no ability to perceive. The next section turns to ‘reasonableness’.

4. Man-made ‘reasonableness’

4.1 Artificial intelligence: the rise or fall of the reasonable commander?

As introduced above, military commanders must assess the existence and weight of the ‘military advantage’ of an attack, as well as whether the attack must be expected to cause ‘excessive’ collateral damage compared to the concrete and direct military advantage anticipated. I shall mainly use the terms ‘advantage’ and ‘excessiveness’ in the following analysis. Both terms are not only highly discretionary, as pointed out above, but also relative: ‘advantage of what?’, ‘to achieve what?’, ‘excessive compared to what?’ and so on. Furthermore, a military advantage is a highly subjective concept: advantage for whom? Consequently, those military commanders assessing the anticipated military advantage must know the aim of the military operation they are taking part in, and they ought to know the relative strength of their opposing forces. By way of illustration, let us assume that the operational objective of a phase of an operation is to establish air superiority in the area of operations. The military advantage relates to this objective. In order to create air superiority it may prove necessary to attack enemy air-defence systems, command and control systems, as well as airfields. The relative importance of these targets is subject to the assessment of the reasonable commander. On the other hand, the assessment of the probability of tactical success in attacking each one of these targets with a specific kind of ammunition may be better performed by machines. Just as a machine can calculate the next move in a game of chess, predictions of enemy behaviour in the air (‘dogfights’) may be better performed by machines. The first example (the relative importance of the targets) belongs to reasoning; the latter is a question of artificial intelligence. Weapons may be smart and even intelligent, but they do not yet possess reason in the ‘human’ definition of the concept.
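To make this division of labour concrete, the following sketch is offered for illustration only. It is written in Python with invented names and numbers and does not describe any existing system; it simply separates what a machine might compute – estimated probabilities of tactical success for candidate targets – from what remains the commander’s reasoning: the relative importance attached to each target and the decision itself.

```python
# Illustrative sketch only: hypothetical names and numbers, not a real targeting system.
# The machine supplies estimates; the weighting and the decision remain with the commander.

from dataclasses import dataclass
from typing import Optional


@dataclass
class CandidateTarget:
    name: str
    estimated_success: float  # machine-estimated probability of tactical success (0..1)


def machine_estimates() -> list:
    """Stands in for whatever model or simulation produces the estimates."""
    return [
        CandidateTarget("enemy air-defence system", 0.62),
        CandidateTarget("command and control node", 0.80),
        CandidateTarget("airfield", 0.55),
    ]


def commander_decision(candidates, relative_importance) -> Optional[CandidateTarget]:
    """The human commander supplies the relative importance of each target
    (reasoning tied to the operational objective, e.g. air superiority)
    and makes the final call; the machine's numbers merely inform it."""
    ranked = sorted(
        candidates,
        key=lambda c: relative_importance.get(c.name, 0.0) * c.estimated_success,
        reverse=True,
    )
    # The ranking is advisory only: the commander may still decline to attack.
    return ranked[0] if ranked else None


if __name__ == "__main__":
    # Hypothetical weights expressing the commander's judgement of relative importance.
    weights = {"enemy air-defence system": 0.9,
               "command and control node": 1.0,
               "airfield": 0.6}
    choice = commander_decision(machine_estimates(), weights)
    print(f"Advisory ranking suggests: {choice.name if choice else 'no attack'}")
```

The point of the sketch is the allocation of roles, not the arithmetic: the weights and the decision are inputs supplied by the reasonable commander, not outputs of the machine.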

At a certain stage, the law of armed conflict becomes a question of whether we can accept that human reason shall yield to intelligent weapons – or rather, to smart weapons, as they obviously lack the social and human dimension of intelligence. Does international law accept that the reasonable commander is replaced by artificial intelligence?

We may therefore ask, as in the heading of this subsection, what the technological development represents – a rise or a fall of the reasonable commander? Does he or she have to be even more reasonable than before to administer the advanced level of technology, or will he or she simply be overtaken by algorithms? In the following I shall argue that commanders i) will have an even more onerous burden of reasonableness than before and ii) cannot be replaced by machines in their performance of assessment. Ultimately, this is also why machines ought not to be compared to humans in this regard.

4.2 The expansive duty to take feasible precautions in attack: a matter of the commander’s choice

The reason for my position, as stated above, lies at least in part in the expansive duty to take feasible precautions in attack, to spare the civilian population, civilians and civilian objects. First, I will briefly address the ‘commander’, and then move on to the precautionary duties. The heading of this sub-section refers to the ‘commander’s’ choice, but who is he or she? Upon whom does the duty lie? The wording of treaty law refers to ‘those who plan or decide upon an attack’, which in principle addresses all levels in the military chain of command. Some assessments, though, are in certain cases presumed to be carried out at a relatively high level of command, such as the assessment of the military advantage anticipated from an attack as a whole. As the example above shows, the attack as a whole may encompass the coordinated effort to achieve air superiority. In these cases, the military advantage is presumed to be assessed at the operational level of command and reassessed at tactical levels of command if circumstances change.

What about the precautionary duties of the commander? Commanders involved in the planning and/or the conduct of an attack shall take all feasible precautions to:

  • verify that the target is a military objective – that is, that by its nature, location, purpose or use, it makes an effective contribution to military action and that its destruction, capture or neutralization will offer a definite military advantage;

  • refrain from launching – and, where required, cancel or suspend – an attack if it becomes apparent that the attack may be expected to cause excessive collateral damage compared to the concrete and direct military advantage anticipated (the proportionality rule).

I shall address each of these two interrelated duties in some more detail. The duty to ‘verify’ that the target is a military objective invites the obvious questions: what does it mean to ‘verify’, which measures are ‘feasible’ in order to achieve verification, and how much of the verification process can be left to a machine – to an autonomous weapon? In simple terms, ‘verification’ means ‘to prove’ that the target is a lawful target, but the rule must not be confused with a duty to be certain. Nor is it a general requirement of ‘eyes on target’, as some doubt will always exist. Using the qualifying term ‘feasible’, the precautionary rule addresses the measures required in order to be as certain as possible under the circumstances. Obviously, one cannot verify having found a military objective unless one knows what the objective looks like. Verification therefore presumes recognition of some sort.
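For illustration only, the following sketch – in Python, with hypothetical features, thresholds and function names, and with no relation to any existing system – shows one way recognition might feed a verification step: the system matches observations against signatures loaded in advance by humans, the commander decides beforehand how much doubt is acceptable, and anything below that threshold is referred back to a human operator.

```python
# Illustration only: hypothetical signatures and thresholds, not an existing system.

KNOWN_SIGNATURES = {"tank": ["turret", "tracks"]}  # features loaded in advance by humans


def recognition_confidence(observed_features, target_class):
    """A crude stand-in for whatever matching the system performs:
    the share of the known signature found in the observation."""
    signature = KNOWN_SIGNATURES.get(target_class, [])
    if not signature:
        return 0.0
    return sum(feature in observed_features for feature in signature) / len(signature)


def verify(observed_features, target_class, commander_threshold):
    """Verification is not certainty: the commander decides in advance how much
    doubt is acceptable, and anything below that level goes back to a human."""
    confidence = recognition_confidence(observed_features, target_class)
    if confidence >= commander_threshold:
        return "treated as a verified military objective"
    return "referred to a human operator – verification not achieved"


# Example: a partial observation, e.g. the vehicle seen from an unusual angle.
print(verify({"tracks"}, "tank", commander_threshold=0.9))
# -> referred to a human operator – verification not achieved
```

The legally interesting points sit outside the code: which features count as a signature, and where the threshold is set, are choices made – and reviewed – by humans.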

The duty to suspend or cancel attacks is closely linked to the duty to verify that the target is a military objective – with the additional requirement of observing the proportionality rule, namely that an attack shall be cancelled or suspended if it becomes apparent that the target is not a military objective, or that the attack may be expected to cause excessive collateral damage. The law, therefore, requires some sort of estimate of collateral damage. The duty encompasses an assessment of what collateral damage is reasonably foreseeable. This is a duty that obviously requires situational awareness through human perception. Furthermore, if an attack goes wrong, corrections may be expected according to the assessment of the damage (the so-called battle damage assessment – BDA).

The process of recognition can feasibly be carried out in accordance with the law by an automated or an autonomous weapon if the parameters to be matched are loaded into the weapon in advance – that is, the weapon is to hit an object with a known signature. Problematic situations may arise if the parameters are very few, given that errors in recognition may occur, or if the weapon operates on its own over a considerable time or travels over long distances, or in a combination of these. These challenges are familiar from all highly automated weapons, such as unmanned aerial vehicles (UAVs or ‘drones’), combat systems on frigates (such as AEGIS, used by the Norwegian Armed Forces) and sea mines. Autonomous weapons, understood in the narrow sense referred to above, pose additional challenges. If the autonomous weapon system has learned, from its own observations, to recognise objects that have not previously been programmed into it – i.e. it is trying to achieve a level of perception – it may attack objects which have not been chosen by humans, but which ‘fit’ what the weapon system has learned about other objects. For example, the system knows that tanks have turrets and tracks, and the autonomous system is able to use this information even when observing the vehicle from a different angle than in the picture loaded into it. The autonomous system can add and process data beyond what is predicted by the human operating it. Such a system may prove demanding for military commanders to operate, since they do not know exactly what information (data) the system will choose to collect and rely upon. Having said this, the use of autonomous processes in weapon systems may not necessarily produce more targeting errors during attacks. On the contrary, high-level technology may provide commanders with more comprehensive and up-to-date information about the target and the target area, creating a higher level of situational awareness. Furthermore, a number of errors are also likely to happen with humans in or on the loop, controlling the information loaded into the weapon and deciding for the system what the threshold is for verifying that an object is a military objective (a lawful target). The pending issue appears, rather, to be which errors are expected versus which errors are accepted. An error in verification due to incorrect data may be acceptable as accidental – provided that the error appeared reasonable according to the circumstances. For example, it may not have been reasonable to expect civilian use of – or presence within – a compound believed to be a military training camp, and nothing more. If, on the other hand, the error of attacking an object that proved to be civilian was made by an autonomous system, was that error reasonable?

Two points shall be made in this regard: first, the legal environment does not appear to be equally forgiving of mistakes made by machines as of those made by humans, simply because it is counter-intuitive to label machines as reasonable. On the contrary, they are, by nature, unpredictable, and the potential for systemic errors appears less consistent with the law than tragic accidents. Second, the human will never be completely ‘out of the loop’. This formula rests on a relative premise, namely that the human is far away (in time or space) from the attack decision. Nevertheless, recalling that the weapon system is a means of warfare and not an actor in warfare, a human being will always be accountable. After all, it will be a human decision to use the system against a given category of target, within a given area of operations, with a given type of ammunition, and so on. No matter how much autonomy a weapon system may eventually possess, the ultimate control of the weapon will be expected to rest with the reasonable commander. He or she will have to rise to the expectations of advanced weaponry in order to comply with LOAC. He or she will have to know when to reassess the legality of the attack and thus to interact adequately with, and, when necessary, intervene in, the autonomous system in time or space. As expectations of accuracy and precision increase, a parallel expectation of situational awareness arguably arises. For example, the law may arguably require a high degree of communication between the missile launch area and the forward air controller in the target area. It should be pointed out, however, that a high degree of communication does not necessarily mean very intense or extensive communication; it may imply sharing time-sensitive information at the right time – for example, on changes in the circumstances in the target area.
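The points about intervention and time-sensitive communication can likewise be pictured in a purely hypothetical sketch (Python, invented names throughout): the weapon is not released until a hold window set by the commander has expired, and any reported change in the target area – or a decision by the commander – suspends or cancels the release.

```python
# Hypothetical sketch of a human-on-the-loop release procedure; invented names throughout.

import time


def updates_from_target_area():
    """Stands in for whatever channel carries time-sensitive information,
    for example reports from a forward air controller near the target."""
    return []  # no changes reported in this illustrative run


def release_with_human_on_the_loop(hold_seconds, commander_abort):
    """The weapon is only released if (i) the hold window set by the commander
    expires without relevant changes in the target area and (ii) the commander
    has not intervened in the meantime."""
    deadline = time.monotonic() + hold_seconds
    while time.monotonic() < deadline:
        if commander_abort():
            return "attack cancelled by the commander"
        if updates_from_target_area():
            return "attack suspended – circumstances changed, reassessment required"
        time.sleep(0.1)
    return "weapon released within the parameters set by the commander"


# Example run with a commander who does not intervene.
print(release_with_human_on_the_loop(hold_seconds=1, commander_abort=lambda: False))
```

Again, the sketch claims nothing about how such interaction is engineered in practice; it merely illustrates that the timing of, and the grounds for, intervention are parameters supplied by the commander, not by the machine.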

5. Conclusions

In this article I have put forward two inter-related main arguments: first, that machines – due to their technological construction – cannot be characterised as autonomous in the human meaning of the word; and second, that the point of departure still remains – namely, that the relevant targeting assessments required by the law of armed conflict must be made by human beings. The law does not state, however, how these assessments are to be carried out in practice. Furthermore, and on a more abstract level, the application of law to the employment of so-called autonomous weapon systems is made difficult because we label them autonomous. The attribution of human features to the machine (anthropomorphism) creates an expectation that the machine can perform human-like assessments. This expectation ought to be rebutted: we should not truly attribute these human features to the machine.

The general rules on precautions in attack do not seem to allow the reasonable commander to be replaced by automation or autonomy – human presence is required. The crucial issue appears to be where in the ‘loop’ that presence is required, and with what intensity. In short, high-tech weaponry requires highly skilled management. The diversity of autonomy in weapon systems and the accompanying information-gathering systems does not leave room for a blanket statement for or against compliance with the law of armed conflict. Autonomy in weapon systems must be carefully used in order to comply with the law. Errors made by machines are arguably less tolerable, where they represent potential systemic errors, than errors made by presumably reasonable commanders. Military commanders are therefore, arguably, expected to overrule autonomous systems wherever precautionary duties so demand. This point does not, however, give reason to state that autonomous systems as such cannot be used in accordance with the law of armed conflict.

The attribution to them of human features does not turn autonomous weapon systems – the machines – into humans. On the contrary, the demands upon the operators appear to increase in step with technological development. It is likely that there will be strict demands upon the human-machine interface in order to make advanced future technology fit within the law. So, Man created Robot in his own image, and they both have to work out their friendship – not only to look at themselves in the mirror. In the end, it appears to be of paramount importance that the machines are not only properly controlled by humans, but also properly tested in all their autonomous functions and in how these interact with the human in the loop – in other words, meaningful human control.

  • 1
    The author is acting Judge Advocate General for Norway (on leave from the position as Associate professor in international law and the law of armed conflict at the Norwegian Defence Command and Staff College). The article is written in her personal capacity and does not reflect the position of the Norwegian military prosecution authority. The author would like to thank her colleagues at the Norwegian Defence Command and Staff College: Kerstin Larsdotter, Bjørn Gunnar Isaksen, Morten Andersen and Bård Ravn for valuable input to an early draft of this article. She would also like to thank the anonymous referee for valuable comments.
  • 2
    Genesis 1:27 (BibleGateway), https://www.biblegateway.com/passage/?search=Genesis+1%3A27&version=NKJV accessed 6 August 2018.
  • 3
    LOAC applies in armed conflicts, either international or non-international. International armed conflicts are governed, partly, by the four Geneva Conventions (GCs) as well as their first Additional Protocol. See Geneva Convention for the amelioration of the condition of the wounded and sick in armed forces in the field (GC I) (adopted 12 August 1949, entered into force 21 October 1950) 970 UNTS 75, Geneva Convention for the amelioration of the condition of the wounded, sick and shipwrecked members of the armed forces at sea (GC II), (adopted 12 August 1949, entered into force 21 October 1950) 971 UNTS 75, Geneva Convention relative to the treatment of prisoners of war (GC III) (adopted 12 August 1949, entered into force 21 October 1950) 972 UNTS 75, Geneva Convention relative to the protection of civilian persons in time of war (GC IV) (adopted 12 August 1949, entered into force 21 October 1950) 973 UNTS 75 and Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the protection of victims of international armed conflicts (Protocol I, hereinafter AP I) (adopted 8 June 1977, entered into force 7 December 1978) 17512 UNTS 1125. See Common Article 2 to the four GCs and Article 1 of AP I. Non-international armed conflicts are governed, partly, by Protocol Additional to the Geneva Conventions of 12 August 1949 and relating to the protection of victims of non-international armed conflicts (Protocol II, hereinafter AP II) (adopted 8 June 1977, entered into force 7 December 1978) 17513 UNTS 1125, see Article 1(1). See also Common Article 3 to the four GCs.
  • 4
    See for example Bonnie Docherty, ‘Losing Humanity: The Case Against Killer Robots’ (Human Rights Watch, 19 November 2012) https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots accessed 28 August 2015 and International Committee of the Red Cross (ICRC), ‘Autonomous Weapons: States Must Address Major Humanitarian, Ethical Challenges’ (ICRC, 2 September 2013) http://reliefweb.int/report/world/autonomous-weapons-states-must-address-major-humanitarian-ethical-challenges accessed 8 October 2017.
  • 5
    Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May be Deemed to be Excessively Injurious or to Have Indiscriminate Effects (and Protocols) (as amended on 21 December 2001) (adopted 10 October 1980, entered into force 2 December 1983) 1342 UNTS 137. The annual UN meetings concerning the CCW Convention have dealt with autonomous weapons in 2014 and 2015 as well as in 2017. See The United Nations Office at Geneva, 'Background on Lethal Autonomous Weapons Systems in the CCW' https://www.unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument accessed 6 August 2018.
  • 6
    Michael C Horowitz, ‘Why Words Matter: The Real World Consequences of Defining Autonomous Weapon Systems’ (2016) 39(1) Temple International Law and Comparative Law Journal 85, 85.
  • 7
    For example, see Convention on Cluster Munitions (adopted 30 May 2008, entered into force 1 August 2010) 2688 UNTS 39 and Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction (adopted 3 December 1997, entered into force 1 March 1999) 2056 UNTS 211.
  • 8
    See Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight (St. Petersburg Declaration) (signed 29 November/11 December 1868), Parliamentary Papers 1869, LXIV 659. The rule in the St. Petersburg Declaration is repeated in The Hague Conventions of 1899 (II) and 1907 (IV) respecting the laws and customs of war on land and its annex: Regulations concerning the Laws and Customs of War on Land (hereinafter Hague Regulations) (signed 29 July 1899/18 October 1907, entered into force 4 September 1900/26 January 1910). Also see Additional Protocol to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (Protocol IV, entitled Protocol on Blinding Laser Weapons) (adopted 13 October 1995, entered into force 30 July 1998) 22495 UNTS 2024.
  • 9
    Horowitz (n 6) 90.
  • 10
    An example of this is the US definition, ‘weapons that, once activated can select and engage targets without further intervention by a human operator. This includes human supervised weapon systems that are designed to allow humans to override operation of the weapon systems’: see United States Department of Defense (US DoD), ‘Directive Number 3000.09 Autonomy in Weapons Systems’ (21 November 2012). In the same direction, see the working definition chosen by Norway: ‘weapons that would search for, identify and attack targets, including human beings, using lethal force without any human operator intervening’. The latter definition was presented at the CCW conference (2016), pointing out that it is a working definition, not a proper legal definition. Document on file with author.
  • 11
    Horowitz (n 6) 92.
  • 12
    For such an approach, see the third category accounted for by Horowitz, ibid, 93.
  • 13
    This is the approach taken in British doctrine: ‘Autonomous systems will in effect be self-aware… as such they must be capable of achieving the same level of understanding as a human… as long as it can be shown that the systems logically follows a set of rules or instructions and are not capable of human level of situation understanding, they should only be considered automated.’ See Ministry of Defence (MoD), ‘Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems’ (30 March 2011).
  • 14
    See for example David P Watson and David H Scheidt, ‘Autonomous Systems’ (2005) 26(4) Johns Hopkins APL Technical Digest 368, 368.
  • 15
    Horowitz (n 6) 89 (referring to a ‘narrow construction’ where AWS are distinct from the weapons today, excluding rare exceptions).
  • 16
    Informal meeting of experts on lethal autonomous weapons systems (LAWS) Geneva, 11-15 April 2016, Towards a “compliance-based” approach to LAWS, Informal Working Paper submitted by Switzerland 30 March 2016 https://www.unog.ch/80256EDD006B8954/(httpAssets)/D2D66A9C427958D6C1257F8700415473/$file/2016_LAWS+MX_CountryPaper+Switzerland.pdf accessed 6 August 2018.
  • 17
    Horowitz (n 6) 95-97.
  • 18
    The Iron Giant, director Brad Bird, Warner Bros (1999).
  • 19
    Robert Sparrow, ‘Twenty Seconds to Comply: Autonomous Weapon Systems and the Recognition of Surrender’ (2015) 91 International Law Studies 699, 705.
  • 20
    In the same line of thought, see Christopher P Toscano, ‘“Friend of Humans”: An Argument for Developing Autonomous Weapons Systems’ (2015) 8 Journal of National Security Law and Policy 189, 193.
  • 21
    See ‘anthropomorphism, n.’ in Oxford English Dictionary (1st ed., Oxford University Press 1885).
  • 22
    Toscano (n 20) 197.
  • 23
    In this direction, see Characteristics of Lethal Autonomous Weapons Systems Submitted by the United States of America (US Working Paper) to the Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, CCW/GGE.1/2017/WP.7 (10 November 2017) where autonomy in weapon systems is emphasised, rather than autonomous systems.
  • 24
    The question is, among others, raised by Jo Sannem and Eirik Skøyeneie, ‘Jussens treghet og teknologiens femmilssteg’ in Tor Arne Berntsen, Gjert Lage Dyndal and Sigrid Redse Johansen (eds), Når Dronene Våkner (Cappelen Damm Akademiske 2016) 109-129.
  • 25
    In the same direction, see US Working Paper (n 23) paragraphs 13 and 25.
  • 26
    St. Petersburg Declaration (n 8). The rule is repeated in the Hague Regulations (n 8), where the English translation of Article 23(e) reads that it is forbidden: ‘To employ arms, projectiles, or material calculated to cause unnecessary suffering.’ The debate (relating to the French and English texts) concerning the differences between ‘superfluous injury’ and ‘unnecessary suffering’ and between ‘calculated to cause’ and ‘of a nature to cause’ is now obsolete, subsequent to the adoption of the provision in AP I (n 3) Article 35(2), where it is stated that: ‘It is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.’
  • 27
    Codified in AP I (n 3) Articles 51(4), 48, 50 and 52(2), as well as Articles 51(5)(b) and 57.
  • 28
    Such weapons are prohibited in the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on their Destruction (CWC) (adopted 3 September 1992, entered into force 29 April 1997) 1975 UNTS 45 and the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on their Destruction (adopted 10 April 1972, entered into force 26 March 1975) 1015 UNTS 163.
  • 29
    See for example Noel E Sharkey, ‘The evitability of autonomous robot warfare’ (2012) 94(886) International Review of the Red Cross 787, 788.
  • 30
    William H Boothby, The Law of Targeting (Oxford University Press 2012) 120.
  • 31
    See Sigrid Redse Johansen, On Military Necessity and the Commander’s Assessment of Military Necessity under the International Law of Armed Conflict During Conduct of Hostilities (Reprosentralen, University of Oslo 2017) chapters 2 and 4.
  • 32
    See Johansen, ibid section 4.6 and Andrew D McClintock, ‘The Law of War: Coalition Attacks on Iraqi Chemical and Biological Weapons Storage and Production Facilities’ (1993) 7 Emory International Law Review 633, 644-645, where McClintock argues that the acquittal of Rendulic in the Hostage Case, ‘reflects the deference given to decisions made while the commander is enshrouded in the ‘fog of war’.’
  • 33
    See for example Jeroen van den Boogaard, ‘Proportionality and Autonomous Weapons Systems’ (2016) Amsterdam Center for International Law Research Paper 2016-07 31 and Michael A Newton, ‘Back to the Future: Reflections on the Advent of Autonomous Weapons Systems’ (2015) 47(1) Case Western Reserve Journal of International Law 5, 18.
  • 34
    See Thomas Slensvik and Sigrid Redse Johansen, ‘Missilangrep og folkeretten – ute av syne ute av sinn?’ in Tor Arne Berntsen, Gjert Lage Dyndal and Sigrid Redse Johansen (eds), Når Dronene Våkner (Cappelen Damm Akademiske 2016) 219-244.
  • 35
    LCDR David H Lee (ed), Operational Law Handbook (International and Operational Law Department, The Judge Advocate General’s Legal Center and School 2015) 12 (footnote omitted). The same position is upheld in the US Department of Defense (US DoD) Law of War Manual (2015, as amended 2016), Department of Defense Directive 2311.01E, DoD Law of War Program, 195-197. See also ratification statements with regard to AP I by Australia, in Adam Roberts and Richard Guelff, Documents on the Laws of War (3rd ed. Oxford University Press 2000) 500, Austria, in ibid, Belgium, in ibid 501, Canada, in ibid 502, Federal Republic of Germany, in ibid 505, Ireland, in ibid 506, Italy, in ibid 507, The Netherlands and New Zealand, in ibid 508, Spain, in ibid 509 and the United Kingdom in ibid 510.
  • 36
    See US DoD, ibid 199 and AP I (n 3) Article 52(2) referring to the ‘circumstances ruling at the time’.
  • 37
    For a more in depth introduction to these questions, see Johansen (n 31) section 13.1.
  • 38
    See ibid section 13.3 as well as the US DoD (n 35) 244 and Michael Schmitt, ‘Asymmetrical Warfare and International Humanitarian Law’ (2008) 62(1) Air Force Law Review 1, 28: ‘Ultimately, no objective means of valuing either incidental injury/collateral damage or military advantage exists. Instead, it is the subjective perspective of the party carrying out the proportionality assessment that matters.’
  • 39
    This position is based on the fact that the advantage relates to the ‘attack as a whole’, as specified by a number of ratifying states with regard to the scope of AP I (n 3) Articles 51 and 57. See UK Declaration of 28 January 1998. Similar declarations are posited by Australia on 24 June 1991, Canada on 20 November 1990, Belgium on 20 May 1986, Germany on 14 February 1991, Italy on 27 February 1986, The Netherlands on 26 June 1987, New Zealand on 8 February 1988 and Spain on 21 April 1989. The notion of the ‘attack as a whole’ is also reflected in UN General Assembly, Rome Statute of the International Criminal Court (last amended 2010) 17 July 1998, Article 8(2)(b)(iv).
  • 40
    This duty is laid down in AP I (n 3) Article 57 and its core represents customary international law. With regard to the latter, see for example US DoD (n 35) paragraph 5.11 for an illustration of what applies as a minimum for States not parties to the AP I.
  • 41
    See for example UK Ministry of Defence, The Manual of the Law of Armed Conflict (Oxford University Press 2005) 85, paragraph 5.32.9 and Brian J Bill, ‘The Rendulic “Rule”: Military Necessity, Commander’s Knowledge, and Methods of Warfare’ (2009) 12 Yearbook of International Humanitarian Law 119, 144.
  • 42
    See Johansen (n 31) with regard to the notion of the attack as a whole in general. With regard to the specific level of command under concrete circumstances, see Michael Bothe, Karl Josef Partsch and Waldemar A Solf, New Rules for Victims of Armed Conflicts: Commentary on the Two 1977 Protocols Additional to the Geneva Conventions of 1949, (2nd ed., Martinus Nijhoff Publishers 2013) 409 where it is argued that: ‘In a coordinated military operation, the relative importance of the military objective under attack in relation to the concrete and direct military advantage anticipated is not a matter which can be determined by individual tank leaders, the commanders of lower echelon combat units or individual attacking bomber aircraft. If assigned a fire or bombing mission they must assume that an appropriate assessment has been made by those who assigned the mission. Thus, in this situation, the decision to cancel will have to be made at the level where the decision to initiate the attack was made.’
  • 43
    In this direction, see Claude Pilloud, Yves Sandoz, Christophe Swinarski and Bruno Zimmerman (eds), Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949 (Martinus Nijhoff Publishers 1987) 681, paragraph 2197 and the Program on Humanitarian Policy and Conflict Research at Harvard University, ‘Commentary on the HPCR Manual on International Law Applicable to Air and Missile Warfare’ (Cambridge University Press 2010) commentary 4 to rule 32(a).
  • 44
    This summary is based on, but not quoted, from the AP I (n 3) Article 57(2)(a)(i) and (iii).
  • 45
    For example, see Michael Schmitt (ed), Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (2nd ed., Cambridge University Press 2017) 475, commentary 12 to rule 113, commenting on the proportionality rule, emphasising that ‘Expectation and anticipation do not require absolute certainty of occurrence.’
  • 46
    In the same direction, see Boothby (n 30) 121.
  • 47
    For a more thorough account of the same position, see Johansen (n 31) paragraph 14.3.1.
  • 48
    A similar position is taken by the ICRC in ICRC, ‘International humanitarian law and the challenges of contemporary armed conflicts’ (ICRC, October 2015) https://www.icrc.org/en/download/file/15061/32ic-report-on-ihl-and-challenges-of-armed-conflicts.pdf accessed 6 August 2018, 52.
  • 49
    See Slensvik and Johansen (n 34) 227.
  • 50
    ibid.
  • 51
    An example where the views on the matter differ is the US bombing of the Al-Firdus bunker/shelter during the Gulf War in 1991. For the different views, see US DoD, Conduct of The Persian Gulf War: Final report to Congress (US DoD 1992) 617 (where it is argued that the attack was lawful and necessary) and Human Rights Watch, Needless Deaths in the Gulf War: Civilian Casualties During the Air Campaign and Violations of the Laws of War (Human Rights Watch 1991) 137 (where the legality was questioned on the basis of what information could reasonably be expected to be possessed).
Copyright © 2018 Author(s)

CC BY 4.0